Saturday, June 01, 2019

If the answer is yes, should it be mandatory?
Can tracking people through phone-call data improve lives?
After an earthquake tore through Haiti in 2010, killing more than 100,000 people, aid agencies spread across the country to work out where the survivors had fled. But Linus Bengtsson, a graduate student studying global health at the Karolinska Institute in Stockholm, thought he could answer the question from afar. Many Haitians would be using their mobile phones, he reasoned, and those calls would pass through phone towers, which could allow researchers to approximate people’s locations. Bengtsson persuaded Digicel, the biggest phone company in Haiti, to share data from millions of call records from before and after the quake. Digicel replaced the names and phone numbers of callers with random numbers to protect their privacy.
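The pseudonymization step described above, replacing each phone number with a random code while keeping one caller’s records linkable to each other, can be sketched in a few lines of Python. The record fields, sample numbers, and function name below are hypothetical illustrations, not Digicel’s actual data or process:

```python
import secrets

def pseudonymize(records, id_field="phone"):
    """Replace each distinct identifier with a random code, consistently,
    so one caller's records stay linkable without exposing the real number."""
    mapping = {}
    out = []
    for rec in records:
        original = rec[id_field]
        if original not in mapping:
            mapping[original] = secrets.token_hex(8)  # random pseudonym
        out.append({**rec, id_field: mapping[original]})
    return out

calls = [
    {"phone": "+509-555-0101", "tower": "PAP-12"},
    {"phone": "+509-555-0101", "tower": "JAC-03"},  # same caller, new tower
    {"phone": "+509-555-0199", "tower": "PAP-12"},
]
anon = pseudonymize(calls)
```

Because the mapping is consistent, researchers can still trace movement between towers for a single (pseudonymous) caller, which is exactly what the migration analysis needed.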
Bengtsson’s idea worked. The analysis wasn’t completed or verified quickly enough to help people in Haiti at the time, but in 2012, he and his collaborators reported that the population of Haiti’s capital, Port-au-Prince, dipped by almost one-quarter soon after the quake, and slowly rose over the next 11 months. That result aligned with an intensive, on-the-ground survey conducted by the United Nations.
At least 20 mobile-phone companies have donated their proprietary information to such efforts, including operators in 100 countries that back an initiative called Big Data for Social Good, sponsored by the GSMA, an international mobile-phone association. Cash to support the studies has poured in from the UN, the World Bank, the US National Institutes of Health and the Bill & Melinda Gates Foundation in Seattle, Washington. Bengtsson co-founded a non-profit organization in Stockholm called Flowminder that crunches massive call data sets with the aim of saving lives.
Yet as data-for-good projects gain traction, some researchers are asking whether they benefit society enough to outweigh their potential for misuse.

Privacy and Cybersecurity June 2019 Events
June 25-26 National Association of College and University Attorneys
Bret Cohen and Stephanie Gold are presenting at the annual conference of the National Association of College and University Attorneys on the panel, “Focus on GDPR and Other Privacy Laws: How to Develop and Implement a Practical Approach to Compliance.” Bret is also presenting on the panel, “Navigating GDPR Compliance for Research.”
Location: Denver, Colorado

They certainly don’t want to be caught violating privacy.
How the CIA is Working to Ethically Deploy Artificial Intelligence
As the Central Intelligence Agency harnesses machine learning and artificial intelligence to better meet its mission, insiders are aggressively addressing issues around bias and ethics intrinsic to the emerging tech.
“We at the agency have over 100 AI initiatives that we are working on and that’s going to continue to be the case,” Benjamin Huebner, the CIA’s privacy and civil liberties officer, said Friday at an event hosted by the Brookings Institution in Washington.
… “One of the interesting things about machine learning, which is an aspect of our division of intelligence, is [experts] found in many cases the analytics that have the most accurate results, also have the least explainability—the least ability to explain how the algorithm actually got to the answer it did,” he said. “The algorithm that’s pushing that data out is a black box and that’s a problem if you are the CIA.”
The agency cannot just be accurate; it also has to be able to demonstrate how it got to the end result. So if an analytic isn’t explainable, it’s not “decision-ready.”

Interesting look at the economic impact of AI.
Artificial intelligence, the future of work, and inequality
by Daniele Tavani, Colorado State University
One of the most important economic thinkers of all time, John Maynard Keynes, wrote in his 1930 essay "The Economic Possibilities for our Grandchildren" that by the 21st century we could fulfill our needs and wants with a 15-hour workweek and devote the rest of our lives to non-monetary pursuits. Fast-forward to 2014, when the late physicist Stephen Hawking told the BBC that "artificial intelligence could spell the end of the human race."
Economists have debated the effect of technology and automation on jobs for a long time. The first set of questions regards labor displacement and whether there is any future for work at all. The second set of questions has to do with how automation impacts income and wealth inequality.
According to the MIT economist David Autor, between 1989 and 2007 job creation occurred mostly in low-paying and high-paying jobs, while middle-class jobs suffered net destruction.

We need to think about “things.”
The Internet Of Things Is Powering The Data-Driven Fourth Industrial Revolution
The Fourth Industrial Revolution is data-driven. And a primary reason for this is the rise of the internet of things (IoT). Connected devices from the consumer level to the industrial level are creating—and consuming—more data than ever before. Last year, IoT devices outnumbered the world's population for the first time, and by 2021, Gartner predicts that one million new IoT devices will be purchased every hour.
In this Extreme Data Economy, businesses, governments, and organizations need to analyze and react to IoT data simultaneously, in real time. This requires continuous analysis of streaming and historical data, location analysis, and predictive analytics using AI and machine learning.

Good definitions.
Top 5 Programming Languages For Machine Learning
Machine learning has been defined by Andrew Ng, a computer scientist at Stanford University, as “the science of getting computers to act without being explicitly programmed.” It was first conceived in the 1950s, but experienced limited progress until around the turn of the 21st century. Since then, machine learning has been a driving force behind a number of innovations, most notably artificial intelligence.
Machine learning can be broken down into several categories, including supervised, unsupervised, semi-supervised and reinforcement learning. While supervised learning relies on labeled input data in order to infer its relationships with output results, unsupervised learning detects patterns among unlabeled input data. Semi-supervised learning employs a combination of both methods, and reinforcement learning motivates programs to repeat or elaborate on processes with desirable outcomes while avoiding errors.
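The supervised/unsupervised distinction described above can be illustrated with a deliberately tiny Python sketch: a toy nearest-neighbor classifier that learns from labeled pairs, and a toy gap-based clustering of unlabeled values. Both are illustrative toys, not production algorithms:

```python
def nearest_neighbor_predict(labeled, x):
    """Supervised: infer a label for x from labeled (value, label) pairs."""
    value, label = min(labeled, key=lambda pair: abs(pair[0] - x))
    return label

def split_clusters(values, gap=5.0):
    """Unsupervised: group sorted values, opening a new cluster at big gaps."""
    clusters = [[]]
    prev = None
    for v in sorted(values):
        if prev is not None and v - prev > gap:
            clusters.append([])
        clusters[-1].append(v)
        prev = v
    return clusters

training = [(1.0, "low"), (2.0, "low"), (10.0, "high"), (11.0, "high")]
label = nearest_neighbor_predict(training, 9.5)    # labeled data -> "high"
groups = split_clusters([1.0, 2.0, 10.0, 11.0])    # unlabeled data -> 2 groups
```

The first function needs labels to learn from; the second discovers structure in raw numbers alone, which is the essential difference between the two families.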
[The languages are: Python, R, JavaScript, C++, and Java.]

Another delivery option. (Also reducing unemployment in Colombia?)
Kiwibots win fans at UC Berkeley as they deliver fast food at slow speeds
Four-wheeled, cooler-size Kiwibots are a familiar sight at UC Berkeley as they ferry burritos, Big Macs and bubble tea to students. They’re social media stars, their pictures posted on Instagram, Snapchat and Facebook. Some students dressed up as them for Halloween. After one caught fire due to a battery issue, students held a candlelight vigil for it.
The Kiwibots do not figure out their own routes. Instead, people in Colombia, the home country of Chavez and his two co-founders, plot “waypoints” for the bots to follow, sending them instructions every five to 10 seconds on where to go.
As with other offshoring arrangements, the labor savings are huge. The Colombian workers, who can each handle up to three robots, make less than $2 an hour, which is above the local minimum wage.
Another cost saving is that human assistance means the robots don’t need pricey equipment such as lidar sensors to “see” around them. Manufactured in China and assembled in the U.S., Kiwibots cost only about $2,500 each, Iatsenia said.

I really have trouble understanding the “big is evil” mindset. I’m much more an “evil is evil” kind of guy.
The Justice Department is preparing a potential antitrust investigation of Google
The exact focus of the Justice Department’s investigation is unclear. The department began work on the matter after brokering an agreement with the government’s other antitrust agency, the Federal Trade Commission, to take the lead on antitrust oversight of Google, according to the people familiar with the matter, who spoke on the condition of anonymity because the deliberations are confidential.
Its expansive, data-hungry footprint increasingly has drawn the attention of Democrats and Republicans on Capitol Hill, who say that Google — and some of its peers in Silicon Valley — have become too large and should potentially be broken up. [Would that reduce data collection? Do anything for consumers? Bob]

A package for our Web Development students.

Friday, May 31, 2019

A pretty good bad example. I wonder what their contract with the PoS vendor says about liability?
POS Malware Found at 102 Checkers Restaurant Locations
The popular Checkers and Rally’s drive-through restaurant chain was attacked by Point of Sale (POS) malware impacting 15 percent of its stores across the U.S.
“We recently became aware of a data security issue involving malware at certain Checkers and Rally’s locations,” said Checkers on a Wednesday website advisory.
The incident impacted 102 Checkers stores across 20 states – all exposed for varying periods, some as early as December 2015 and as recently as April 2019 (a full list of impacted stores is on Checkers’ data breach security advisory page).

I don’t need to spend much time gathering examples for my Computer Security class.
NY Investigates Exposure of 885 Million Mortgage Documents
New York regulators are investigating a weakness that exposed 885 million mortgage records at First American Financial Corp. as the first test of the state’s strict new cybersecurity regulation. That measure, which went into effect in March 2019 and is considered among the toughest in the nation, requires financial companies to regularly audit and report on how they protect sensitive data, and provides for fines in cases where violations were reckless or willful.
On May 24, KrebsOnSecurity broke the news that First American had just fixed a weakness in its Web site that exposed approximately 885 million documents — many of them with Social Security and bank account numbers — going back at least 16 years. No authentication was needed to access the digitized records.
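The weakness described, documents reachable by anyone with a link and no authentication step, is a classic missing-authorization bug. A minimal defensive sketch, using hypothetical store and function names rather than First American’s actual code, checks ownership before serving a record:

```python
# Hypothetical document store: doc_id -> (owner, contents)
DOCUMENTS = {
    "900000001": ("alice", "escrow file, SSN, bank details"),
    "900000002": ("bob", "wire instructions"),
}

def fetch_document(doc_id, requesting_user):
    """Serve a record only after verifying the requester owns it.
    The vulnerable pattern skips this ownership check entirely,
    so anyone who guesses a sequential doc_id gets the contents."""
    record = DOCUMENTS.get(doc_id)
    if record is None:
        return None
    owner, contents = record
    if requesting_user != owner:
        raise PermissionError("not authorized for this document")
    return contents
```

With sequential identifiers and no such check, enumerating 885 million records is just a loop over document numbers, which is why this class of flaw is so damaging.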

I doubt they are in a hurry. Let’s see what Brazil and California do…
When Europe first implemented the gold-standard GDPR privacy law, Apple was one of the first companies to pledge to offer similar protections to its customers globally, not just to EU citizens …
However, the company went on to argue that it’s not enough to rely on companies to voluntarily do the right thing and that the US needs its own version of GDPR.
Others have since joined the call, including Microsoft, Google, and even Facebook. This is less surprising than it might seem even for companies where users are the product: it’s better for a company to know ahead of time what it can and can’t do than to make business decisions based on practices which may later be outlawed.
There seem to be three main sticking points. First, ensuring that the law doesn’t place too great a burden on small businesses, who are not as well placed as large companies to absorb compliance costs. Second, disagreement between Republicans and Democrats on the role of the FTC. Third, concern among Democrats in particular that the federal government would be overriding privacy laws already being created at the state level.

A mini-GDPR?
Zack Whittaker reports:
Good news!
Maine lawmakers have passed a bill that will prevent internet providers from selling consumers’ private internet data to advertisers.
The state’s senate unanimously passed the bill 35-0 on Thursday following an earlier vote by state representatives 96-45 in favor of the bill.
Read more on TechCrunch.

Cost? Who cares about cost? My students, for starters.
Understanding the GDPR Cost of Continuous Compliance
Before the new European General Data Protection Regulation (GDPR) went into effect in May 2018, both small- and mid-sized companies and larger enterprises found themselves scrambling to comply with a regulation they found vague and complex, with no clear path to achieving compliance. Now, one year later, we have a much better view of not just the GDPR cost to prepare for the new regulatory environment, but also how much organizations are spending on continuous compliance. A new report from DataGrail, “The Cost of Continuous Compliance,” provides valuable benchmarking data on just how much organizations are spending – both in terms of financial resources and time – in order to keep up with the demands of continuous compliance.

Interesting because it’s not how we normally look at AI. More like Milton Friedman’s pencil.
Alexa, please explain the dark side of artificial intelligence
Last year Kate Crawford, a New York University professor who runs an artificial intelligence research centre, set out to study the “black box” of processes that exist around the hugely popular Amazon Echo device.
Crawford did not do what you might expect when approaching AI – namely, study algorithms, computing systems and suchlike. Instead, she teamed up with Vladan Joler, a Serbian academic, to map the supply chains, raw materials, data and labour that underpin Alexa, the AI agent that Echo’s users talk to.
It was a daunting process – so much so that Joler and Crawford admit that their map, Anatomy of an AI System, is just a first step. The results are both chilling and challenging. For what the map shows is that contemporary western society is blind to the real price of its thirst for technology.
[Anatomy of an AI System:

Seems to indicate we have a long way to go.
In his new novel, Machines Like Me, the novelist Ian McEwan tells the story, set in an alternate-history England of 1982, of a man who buys a humanoid robot.
One of the first things Adam says when he is switched on is “I don’t feel right,” and, typically for cautionary tales about robots, it only gets worse from there.
Based on an archive of ethnographic research on various societies, known as the Human Relations Area Files, the research has revealed seven “plausible candidates for universal moral rules” that are constant among 60 societies randomly chosen around the world, from bands of hunter-gatherers to industrialized nation states. These behaviours were regarded as “uniformly positive,” without exception, in every society studied, from Ojibwa, Tlingit and Copper Inuit in North America, to Somali, Korea, Highland Scots, Serbs, and Lau Fijians internationally.
The rules are: to allocate resources to kin; be loyal to groups; be reciprocal in altruism; be brave, strong, heroic and dominant like a hawk; be humble, subservient, respectful and obedient like a dove; be fair in dividing resources; and recognize property rights.
McEwan’s novel opens with a quotation from a Rudyard Kipling poem about the terrifying promise of the industrial age: “But remember, please, the Law by which we live, / We are not built to comprehend a lie…”
The line that follows in Kipling’s poem seems equally grim today, in the age of AI, now that robots threaten to live up to all the good and evil of human behaviour: “We can neither love nor pity nor forgive. / If you make a slip in handling us you die!”
[The Kipling poem:

(Related) Video. 1:31
Standards and Oversight of Artificial Intelligence
The National Institute of Standards and Technology (NIST) and The Information Technology and Innovation Foundation (ITIF) Center for Data Innovation hosted a discussion on setting standards and oversight for artificial intelligence. Among the panelists were representatives from federal agencies working on scientific standards as well as researchers and technology developers working for firms in the artificial intelligence space. They talked about the benefits of setting technological standards early for both private companies and government agencies, and ways the two could work together to expedite standards.

It’s a start. Ethics will be a large part of my Security Compliance class this summer.
SF State launches new certificate in ethical artificial intelligence
Artificial intelligence (AI) has the potential to transform our life and work, but it also raises some thorny ethical questions. That’s why a team of professors from three different colleges at San Francisco State University has created a new graduate certificate program in ethical AI for students who want to gain a broader perspective on autonomous decision-making.
The program is one of just a handful focusing on AI ethics nationally and is unique in its collaborative approach involving the College of Business, Department of Philosophy and Department of Computer Science.
Courses for the certificate will begin this fall with a philosophy class focusing on the idea of responsibility, which will also give some historical context for modern AI and discuss its impacts on labor.
In another course, students will learn about how businesses can act ethically and will consider their responsibility to ensure that technology — for instance, facial recognition — doesn’t interfere with the rights of others.

Thursday, May 30, 2019

A problem my Computer Security students must address.
New Zealand Says Budget Leak Was Bungled, Not Hacked
The Treasury department called in police this week after the opposition National Party released parts of the government's annual budget, which was not due for release until Thursday.
At the time, Treasury Secretary Gabriel Makhlouf said his department had fallen victim to a "systematic" and "deliberate" hack, rejecting "absolutely" any suggestion the information had been accidentally posted online.
He was forced into an embarrassing backdown Thursday after police found no evidence that illegal activity was behind the leak.
"On the available information, an unknown person or persons appear to have exploited a feature in the website search tool but... this does not appear to be unlawful," Makhlouf said in a statement.
He said Treasury prepared a "clone" website ahead of the Budget's release but did not realise that entering specific search terms on it revealed embargoed information. [Did they test it? Bob]

Interesting question. Do you want an employee who can’t learn? I am a fan, but I suspect some lawyers might not be?
Should Failing Phish Tests Be a Fireable Offense?
Would your average Internet user be any more vigilant against phishing scams if he or she faced the real possibility of losing their job after falling for one too many of these emails? Recently, I met someone at a conference who said his employer had in fact terminated employees for such repeated infractions. As this was the first time I’d ever heard of an organization actually doing this, I asked some phishing experts what they thought (spoiler alert: they’re not fans of this particular teaching approach).

Another Computer Security resource. If you misidentify it, you probably won’t secure it properly.
FPF and IAF Release “A Taxonomy of Definitions for the Health Data Ecosystem”
Healthcare technologies are rapidly evolving, producing new data sources, data types, and data uses, which precipitate more rapid and complex data sharing. Novel technologies—such as artificial intelligence tools and new internet of things (IOT) devices and services—are providing benefits to patients, doctors, and researchers.
… Understanding the evolving health data ecosystem presents new challenges for policymakers and industry. There is an increasing need to better understand and document the stakeholders, the emerging data types and their uses.
The Future of Privacy Forum (FPF) and the Information Accountability Foundation (IAF) partnered to form the FPF-IAF Joint Health Initiative in 2018. Today, the Initiative is releasing A Taxonomy of Definitions for the Health Data Ecosystem; the publication is intended to enable a more nuanced, accurate, and common understanding of the current state of the health data ecosystem.
[Read the taxonomy here:

Not a backdoor, it simply removes the wall.
Apple, Google and WhatsApp condemn UK proposal to eavesdrop on encrypted messages
In an open letter to GCHQ (Government Communications Headquarters), 47 signatories including Apple, Google and WhatsApp have jointly urged the U.K. cybersecurity agency to abandon its plans for a so-called “ghost protocol.”
It comes after intelligence officials at GCHQ proposed a way in which they believed law enforcement could access end-to-end encrypted communications without undermining the privacy, security or confidence of other users.
The pair said it would be “relatively easy for a service provider to silently add a law enforcement participant to a group chat or call.”
In practice, the proposal suggests a technique which would require encrypted messaging services — such as WhatsApp — to direct a message to a third recipient, at the same time as sending it to its intended user.

You can tell they’ve been following this topic.
GDPR – The Year in Review
Following the one-year anniversary of the GDPR coming into effect, Hogan Lovells’ Privacy and Cybersecurity practice has prepared a compilation of key GDPR-related developments of the past 12 months. The compilation covers regulatory guidance, enforcement actions, court proceedings, and various reports and materials.

(Related) When will we hit the tipping point, where the EU goes after these people?
One Year Into GDPR, Most Apps Still Harvest Data Without Permission
While good-acting companies knock themselves out trying to comply with data protection and privacy laws, and regulators debate the minutiae of cookie consent policies, bad actors simply couldn’t care less.
Apps often presented users with a consent notice screen and then ignored that choice, transmitting the data regardless of the user’s preference.
“The regulation exists, but is there a body in Belgium looking at the mobile ecosystem to try and determine which calls from a device are legitimate or not – hell no, that’s not happening,” said Grant Simmons, head of client analytics at Kochava.
But even if there was, this stuff is hard to catch by design, Simmons said. Around 30% of the data calls transmitted to and from devices are encrypted and when fraudsters enter the picture, they usually use transitory domains to obscure their actions, including data harvesting.

Hey, it’s a start!
10 things we should all demand from Big Tech right now
We need an algorithmic bill of rights. AI experts helped us write one.
I. Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
VII. Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.

(Related) A Canadian version?
Canada's Digital Charter: Trust in a digital world
See Canada's Digital Charter and how the Government of Canada is building this foundation of trust and encouraging continued growth across our economy. It relies on governments, citizens and businesses working together to ensure that privacy is protected, data is kept safe, and Canadian companies can lead the world in innovations that fully embrace the benefits of the digital economy.

Will Google become liable for ‘encouraging’ drivers to speed?
Google Maps adds ability to see speed limits and speed traps in 40+ countries

Wednesday, May 29, 2019

Pre-crime? Is refusing the ‘screening’ proof of mental illness?
Joe Cadillic writes:
It has been nearly two years since I reported on the dangers of creating a law enforcement run Mental Health Assessment (MHA) program. In Texas, police use MHAs to “screen” every person they have arrested for mental illness.
But the TAPS Act, first introduced in January, would take law enforcement screenings to a whole new level. It would create a national threat assessment of children and adults.
In the course of six months, the Threat Assessment, Prevention and Safety (TAPS) Act (H.R. 838) has seen support for the bill grow to nearly 80 members of Congress.
Read more on MassPrivateI

Is this a massive privacy breach? I’m not sure.
Elizabeth Hernandez follows up on a story that the Colorado Springs Independent broke last week:
A professor at the University of Colorado’s Colorado Springs campus led a project that secretly snapped photos of more than 1,700 students, faculty members and others walking in public more than six years ago in an effort to enhance facial-recognition technology.
The photographs were posted online as a dataset that could be publicly downloaded from 2016 until this past April.
Read more on the Denver Post.

Until AIs achieve peoplehood.
When algorithms mess up, the nearest human gets the blame
Earlier this month, Bloomberg published an article about an unfolding lawsuit over investments lost by an algorithm. A Hong Kong tycoon lost more than $20 million after entrusting part of his fortune to an automated platform. Without a legal framework to sue the technology, he placed the blame on the nearest human: the man who sold it to him.
It’s the first known case over automated investment losses, but not the first involving the liability of algorithms. In March of 2018, a self-driving Uber struck and killed a pedestrian in Tempe, Arizona, sending another case to court. A year later, Uber was exonerated of all criminal liability, but the safety driver could face charges of vehicular manslaughter instead.
Both cases tackle one of the central questions we face as automated systems trickle into every aspect of society: Who or what deserves the blame when an algorithm causes harm? Who or what actually gets the blame is a different yet equally important question.

Do you think Forbes knows something we don’t?
What If Artificial Intelligence (AI) & Machine Learning (ML) Ruled the World?
What if instead of political parties, presidents, prime ministers, kings, queens, armies, autocrats, and who knows what else, we turned everything over to expert systems? What if we engineered them to be faithful, for example, to one simple principle: "human beings regardless of age, gender, race, origin, religion, location, intelligence, income or wealth, should be treated equally, fairly and consistently"?
Here’s some dialogue – enabled by natural language processing (NLP) – with an expert system named “Decider” that operates from that single principle (you can imagine how it might behave if the principle was completely different – the opposite of equal and fair). The principle is supported by the data and probabilities the system collects and interprets. The “inferences” made by Decider are pre-programmed. In today’s political parlance, Decider is “liberal.” Imagine the one the American TEA Party or Freedom Caucus might engineer – which is the essence of this post: first principles rule.

Keep trying until we get it right (or until AI writes its own?)
Will we ever agree to just one set of rules on the ethical development of artificial intelligence?
Australia is among 42 countries that last week signed up to a new set of policy guidelines for the development of artificial intelligence (AI) systems.
Yet Australia has its own draft guidelines for ethics in AI out for public consultation, and a number of other countries and industry bodies have developed their own AI guidelines.
Responding to these fears and a number of very real problems with narrow AI, the OECD recommendations are the latest of a number of projects and guidelines from governments and other bodies around the world that seek to instil an ethical approach to developing AI.