Tuesday, June 25, 2019


Are the courts thinking about GDPR and similar laws as they make these rulings?
Facebook fails to kill class-action lawsuit over data breach
A proposed class-action lawsuit against Facebook will move forward after a judge rejected the company’s contention that it should not be held liable for failing to protect users’ information.
Facebook last year announced that a data breach allowed hackers to make off with information about some 30 million people. A vulnerability in Facebook’s code enabled outsiders to access users’ digital access tokens, which make it possible to visit the site without logging in each time.
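Why stolen access tokens matter: a bearer token stands in for a logged-in session, so whoever holds it is, as far as the server can tell, the user. A minimal sketch (not Facebook’s actual design; names and logic are illustrative only):

```python
import secrets

# Server-side table mapping tokens to users. In this toy model,
# possession of the token is the only proof of identity.
sessions = {}

def issue_token(user_id):
    token = secrets.token_hex(16)   # random, unguessable value
    sessions[token] = user_id       # server remembers whose it is
    return token

def authenticate(token):
    # No password check: the token alone grants the session.
    return sessions.get(token)

alice_token = issue_token("alice")
assert authenticate(alice_token) == "alice"
# A stolen copy of the token authenticates just as well for an attacker,
# which is why harvesting tokens is equivalent to hijacking accounts.
```

This is why the breach was serious even though passwords were not taken: tokens bypass the password entirely.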
The company had previously claimed that some of the plaintiffs’ information was not “sensitive” because it was accessible on a public Facebook profile, and that no real harm had been done because attackers had failed to steal users’ financial information and passwords. The company also argued it should be absolved of responsibility because of the sophistication of the hack.
U.S. District Judge William Alsup disagreed, ruling on June 21 that the evidence-gathering phase of the case should proceed “with alacrity.”
Judge Alsup previously warned Facebook’s legal team he would authorize a “bone-crushing” discovery process on behalf of affected users, according to Law360. Alsup also said user concerns are worth “real money,” rather than “some cosmetic injunctive relief.”
This is one of the many legal matters besieging Facebook. The Silicon Valley giant’s data-sharing deals with technology companies are under criminal investigation, according to the New York Times. Meanwhile, the company is preparing to pay a reported $5 billion to settle a Federal Trade Commission probe into whether it improperly shared information about tens of millions of users with Cambridge Analytica.


(Related)
A Judge Just Ruled You Can Sue The Media Over Facebook Comments From Readers
Dylan Voller, the Aboriginal man who was shown restrained and wearing a spit hood at age 17 in shocking CCTV footage from an adult prison, has been given the green light to sue media companies over Facebook comments written by their readers.
Voller claims a number of comments on the post defamed him by falsely suggesting, among other things, that he "savagely bashed" a Salvation Army officer, causing him serious injury, and that he is a rapist.
These comments were written by readers.
Before Voller's case went to trial, Justice Stephen Rothman considered whether the media companies could be considered liable for the reader comments.
The three companies argued they were not liable during a three-day hearing in February, in which social media managers took the stand and were questioned about how they monitored and moderated Facebook comments.
Rothman ruled in Voller's favour on Monday afternoon, finding that the media companies were the publishers, in a legal sense, of the comments.




The original, please.
Amol Rajan: What kind of internet do you want?
In recent months I have been influenced by a paper on The Geopolitics of Digital Governance by two University of Southampton academics, Kieron O'Hara and Dame Wendy Hall. The paper popularised, but didn't invent, the idea of the "splinternet" - namely, that there is not one internet, but four.
These four internets are, broadly:
the open, universalist version envisioned by the web's pioneers;
the current, largely Californian internet dominated by a few tech giants (Apple, Amazon, Google and Facebook);
a more regulated, European internet; and
an authoritarian, walled-garden approach, of the kind seen in China, which has its own tech giants (Baidu, Alibaba, Tencent).




Philosophy from a psychologist? Not sure I agree with him, but I guess it’s a start.
How to Build Ethical Artificial Intelligence
The field of artificial intelligence is exploding
… Because of the increasing impact of AI on people's lives, concern is growing about how to take a sound ethical approach to future developments. Building ethical artificial intelligence requires both a moral approach to building AI systems and a plan for making AI systems themselves ethical.




How to make money with technology? List and Infographic.
Internet of Things Leads Second Annual Top 10 List from CompTIA Emerging Technology Community
Rankings based on near-term business and financial opportunities for companies working in the business of technology
The Internet of Things (IoT) is the emerging technology that offers the most immediate opportunities to generate new business and revenues, according to the Emerging Technology Community at CompTIA, the leading trade association for the global tech industry.
The community has released its second annual Top 10 Emerging Technologies list, ranked according to the near-term business and financial opportunities the solutions offer to IT channel firms and other companies working in the business of technology.




Big and not-so-big data. Now “Moneyball” is everywhere!
How the Seattle Seahawks use data to win — on and off the field
… The Seahawks were the first NFL franchise to establish a sports science group seven years ago, an effort spearheaded by general manager John Schneider and head coach Pete Carroll.
Fast forward to today, and almost every major league sports team has some type of sports science or analytics arm.
The Seahawks are also using data to improve the fan experience — and, as a result, the team’s bottom line.
For example, fans take surveys that gauge their level of happiness with everything from concession stand options to WiFi connections. Recent results showed complaints about stadium audio issues — but only when the Seahawks created a heat map of the data did they figure out that the issues were confined to the stadium’s four corners.
“Fans were telling us this information, but we never visualized it,” Dunn said.
It turned out that speakers were never actually installed in those corners when the stadium was built in 2002.
Instead of replacing the stadium’s entire audio system, the team was able to spend a fraction to fix the issues that the data visualization surfaced, and used the saved costs on other more pressing capital expenditures.
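The lesson is that the pattern was already in the survey data; it just needed to be aggregated by location. A hedged sketch of the idea, using made-up section names and counts and only the standard library:

```python
from collections import Counter

# Hypothetical survey rows: (stadium section, complained about audio?).
# All values here are invented for illustration.
responses = [
    ("NW corner", True), ("NE corner", True), ("SW corner", True),
    ("SE corner", True), ("NW corner", True), ("NE corner", True),
    ("North sideline", False), ("South sideline", False),
    ("East end zone", False),
]

# Aggregate complaints by section -- the step that turns scattered
# comments into a location pattern.
complaints = Counter(sec for sec, bad_audio in responses if bad_audio)

# A crude text "heat map": one block per complaint, hottest first.
for section, n in complaints.most_common():
    print(f"{section:15s} {'#' * n}")
```

In this toy data, every complaint lands in a corner section, which is the kind of pattern a real heat map would surface immediately.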



Monday, June 24, 2019


Home Security is a real issue.
Walmart and Amazon want to see inside your house. Should you let them?
… Amazon.com Inc. and Walmart Inc., the two largest retailers in the U.S., are now offering to send delivery people inside your house to safely deposit your packages indoors and your groceries inside your fridge.
To calm fears that delivery people might get up to no good, both companies are promising to let you watch the deliveries happen live on video. There’s one catch: Amazon and Walmart get to hang on to that video too.
… risks include having sensitive home data — including voice and video recordings and codes for the front door’s smart lock — hacked and released online, or having the same private data subject to search by law enforcement.
More prosaically, with the advent of computer vision analytics, which offer the ability to extract consumer insights from millions of hours of video, the companies behind these smart-home services could use imagery from your living room to improve their ad targeting or as raw material to train their computer vision algorithms.




For my Security Compliance class.
New York Privacy Act Would Be Considerably Tougher Than California’s Bill
… A patchwork system of state-by-state privacy legislation would require companies to be much more careful and cautious in order to avoid running afoul of any state laws. Law experts are now calling New York State “the next battleground in the fight for state privacy laws.”
… There are several notable differences between the New York Privacy Act and the CCPA. For example, the New York Privacy Act gives New Yorkers the right to sue companies directly, without waiting for the State Attorney General to take action on their behalf.
… the New York privacy legislation does not impose any minimum size threshold on the companies it would cover. California, by way of contrast, says that companies must have at least $25 million in gross annual revenue to fall within the purview of the CCPA. In New York State, a small social media startup with just a few employees and zero revenue would be expected to follow the full scope and spirit of the New York Privacy Act.
… Under the current interpretation of the New York Privacy Act, businesses must act as “data fiduciaries” when interacting with state residents. In such a way, they would be expected to act much more like attorneys or doctors, who must adhere to very stringent guidelines when it comes to protecting the privacy of citizens.
… The big question is whether large tech companies like Facebook and Google – which have constructed very profitable business models around the idea of trading in personal data – would ever be able to transform into data fiduciaries.




Worth a read.
What Does an AI Ethicist Do?
… many organizations are increasingly paying attention to ethical issues around AI. In a 2018 Deloitte survey, 32% of AI-aware executives ranked the ethical risks of AI as one of their top three AI-related concerns. Microsoft and O’Brien are essentially bellwethers on this issue — the figurative sheep at the head of the flock — in creating a role focused on, as O’Brien puts it, AI ethics “advocacy and evangelism.”
I talked with O’Brien to find out how his role came about, what he does in it, and what kinds of policies might emerge at Microsoft because of his work. I also asked him how the AI ethicist role might relate to similar positions at other companies.
... He began to realize, he said, that leaving all of businesses’ ethical and policy issues to the lawyers was not sufficient. O’Brien was seeing that, as data and processing move to the cloud, technology, policy, and geography issues all were starting to collide.




Perspective. What happens when you remove “barriers to entry.”
The number of American taxi drivers has tripled in a decade
The CPS data likely underestimates the true number of Americans who work as part- or full-time taxi drivers and chauffeurs. That’s because the CPS counts US adults who give this occupation as their primary job, while many people who drive for a service like Uber do it as a secondary gig. For context, a 2016 study by Uber and labor economist Alan Krueger found that more than half of drivers worked full time at another job, and 14% worked part time at another job. Lots of those people aren’t being captured by the CPS estimate.




Could be amusing…
New tool by Harvard Law lets people explore language usage in caselaw abajournal.com
ABAJournal: “Parsing 6.7 million federal and state cases and 12 billion words, a new tool allows the public to explore the use of language over 360 years of caselaw. Released [June 19, 2019], “Historical Trends” was built by the Harvard Law School Library Innovation Lab and is free to use. “I think it’s a good example of a research tool that we can offer that the commercial providers have never been inclined to explore,” says Adam Ziegler, director of the Harvard Law School Library Innovation Lab.
The tool allows a user to explore the use of language in caselaw dating back to the colonial period. A user can track the historical utilization of a word like “privacy,” which was fairly dormant during the 19th and early 20th centuries before receiving much more attention in the 1950s and 1960s. Or, a comparison can be made to see which is more commonly referred to in litigation, such as Harvard or Yale. (Turns out, it’s Yale by a mile.) The tool can also visualize the use of a word across various states, as explained in a blog post by Kelly Fitzpatrick, a research associate at the Library Innovation Lab. For example, Nevada is currently leading the country in cases mentioning the Fifth Amendment, while Iowa has seen a recent uptick in Ninth Amendment mentions, for some reason. The tool is plugged into the repository of cases released last fall as the Caselaw Access Project. Outside of the Library of Congress, it is the most comprehensive database of its kind—totaling 200 terabytes of information…”
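Under the hood, a tool like this is counting term occurrences per year across a corpus. A toy sketch of the idea (the mini-corpus below is invented, not real caselaw, and this is not the Caselaw Access Project’s actual code):

```python
from collections import defaultdict

# Toy corpus standing in for case text: (year, opinion_text) pairs.
corpus = [
    (1890, "the right to property shall not be disturbed"),
    (1955, "the right of privacy is implicated by the search"),
    (1965, "privacy in the marital bedroom ... privacy is fundamental"),
]

def term_trend(term, docs):
    """Count occurrences of `term` per year, Historical-Trends style."""
    counts = defaultdict(int)
    for year, text in docs:
        counts[year] += text.lower().split().count(term.lower())
    return dict(counts)

print(term_trend("privacy", corpus))
```

Scale the same loop up to 6.7 million cases and you get exactly the kind of “dormant in the 19th century, surging in the 1950s” curve the article describes.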




Perspective. Watching what could be a multi-day match in five minute chunks?
Cricket World Cup highlights just how big video streaming is in India
More than 100 million users tuned in to Hotstar, an on-demand streaming service owned by Disney, on June 16, the day India and Pakistan played a league match against each other. That’s the highest engagement the four-year-old service has clocked on its platform to date, it said in a statement today.
Hotstar said about 66% of its viewers came from outside of big metro cities
To be sure, these 100 million users are not paying subscribers. Hotstar offers five-minute streaming of live events to users at no cost.




Coming soon to a classroom near me!
The Best Machine Learning Resources
Medium – The Best Machine Learning Resources – “A compendium of resources for crafting a curriculum on artificial intelligence, machine learning, and deep learning



Sunday, June 23, 2019


Conflict short of war. US military joint doctrine has defined “low intensity conflict” as “political-military confrontation between contending states or groups below conventional war and above the routine, peaceful competition among states.”
Iranian hackers wage cyber campaign amid tensions with U.S.
In recent weeks, hackers believed to be working for the Iranian government have targeted U.S. government agencies, as well as sectors of the economy, including oil and gas, sending waves of spear-phishing emails, according to representatives of cybersecurity companies CrowdStrike and FireEye, which regularly track such activity.
It was not known if any of the hackers managed to gain access to the targeted networks with the emails.
… Iran has long targeted the U.S. oil and gas sectors and other critical infrastructure, but those efforts dropped significantly after the nuclear agreement was signed. After President Trump withdrew the U.S. from the deal in May 2018, cyber experts said they have seen an increase in Iranian hacking efforts.
… Yahoo News reported Friday that U.S. Cyber Command launched a retaliatory digital strike against an Iranian spy group on Thursday.


(Related) A different target?
Sources: US Cyber Command launched an offensive cyber strike on Thursday that disabled Iranian computer systems used to control rocket and missile launches


(Related)
US retaliated against Iranian spy group's cyberstrike
US Cyber Command launched a retaliatory cyberstrike last week against an Iranian spy group, according to a US official and a former US intelligence official familiar with the matter.
… The US official added the online strike targeted an Iranian spy group's computer software that was used to track the tankers that were targeted in the Gulf of Oman on June 13.
… The Department of Homeland Security announced Saturday that Iran has recently increased cyberattacks against US industry and government agencies as tension peaked between the countries this week.


(Related) I wonder if they used one of these?
Top 8 Ship Tracking Websites To Track Your Ship Accurately
… Here we have a list of a few prominent websites which are widely used for near-accurate online ship tracking. These vessel trackers provide not only the ship’s location, but also technical and non-technical details, designated routes, and even photographs.




I’m always willing to spend (my time) lavishly on “Free Stuff!”
Free Resources
Find The Most Updated and Free Artificial Intelligence, Machine Learning, Data Science, Deep Learning, Mathematics, Python, R Programming Resources.
[One example: Python Data Science Handbook]



Saturday, June 22, 2019


Update. Follow-up reports almost always show the hack was larger than originally reported. Why would a camera provider have all this information?
Report: CBP contractor hack was vast, revealed plans for border surveillance
A cyberattack on a subcontractor for U.S. Customs and Border Protection (CBP) exposed surveillance plans and much more than was previously disclosed, according to a new report.
Earlier this month, CBP said photos of travelers and license plates had been compromised during a cyberattack, adding that fewer than 100,000 people were affected.
However, the Washington Post reported on Friday that the cyberattack also compromised documents including “detailed schematics, confidential agreements, equipment lists, budget spreadsheets, internal photos and hardware blueprints for security systems.”
The available information taken was “hundreds of gigabytes,” the newspaper reported.




No standard definition of ‘fairness?’
How AI Can Help with the Detection of Financial Crimes
According to Dickie, AI can have a significant impact in data-rich domains where prediction and pattern recognition play an important role. For instance, in areas such as risk assessment and fraud detection in the banking sector, AI can identify aberrations by analyzing past behaviors. But, of course, there are also concerns around issues such as fairness, interpretability, security and privacy.




It gets complicated fast…
An Analysis of the Consequences of the General Data Protection Regulation on Social Network Research
This article examines the principles outlined in the General Data Protection Regulation (GDPR) in the context of social network data. We provide both a practical guide to GDPR-compliant social network data processing, covering aspects such as data collection, consent, anonymization and data analysis, and a broader discussion of the problems emerging when the general principles on which the regulation is based are instantiated to this research area.




Why did you do that, Mr. Terminator?
TED: Teaching AI to Explain its Decisions
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem.




Businesses exist to take risks. Lawyers exist to avoid risks?



Friday, June 21, 2019


Computer Security is about making sure this cannot happen.
Julie Anderson reports:
An IT error resulted in the deletion of patient records Tuesday from the Creighton University Campus Pharmacy at 2412 Cuming St.
The lost data includes prescription and refill history and insurance information for all customers. A count of customers wasn’t immediately available, but the pharmacy filled 50,000 prescriptions in 2017.
The incident did not involve a breach, Creighton officials said. No patient records were stolen; the data was deleted.
However, the loss means that the pharmacy’s database must be rebuilt. All patient data must be re-entered and new prescriptions obtained from physicians.
Read more on Live Well Nebraska.
So… was this/is this pharmacy a HIPAA-covered entity? It would seem that it almost certainly is. So where was its risk assessment? And did they really have no backup?
This may not be a reportable breach under HIPAA and HITECH, but HHS OCR should be auditing them and looking into this if they are covered by HIPAA.


(Related) or this.
Frédéric Tomesco reports:
More than 2.9 million Desjardins Group members have had their personal information compromised in a data breach targeting Canada’s biggest credit union.
The incident stems from “unauthorized and illegal use of internal data” by an employee who has since been fired, Desjardins said Thursday in a statement. Computer systems were not breached, the cooperative said. [But the data was… Bob]
Names, dates of birth, social insurance numbers, addresses and phone numbers of about 2.7 million individual members were released to people outside the organization, Desjardins said. Passwords, security questions and personal identification numbers weren’t compromised, Desjardins stressed. About 173,000 business customers were also affected.
Read more on Montreal Gazette.
The statement from Desjardins Group does not offer any explanation of the former employee’s “ill-intentioned” conduct. Was the employee selling the data to criminals? Were they selling it to spammers? Were they giving it to a competitor? It would be easier to evaluate the risk to individuals if they knew more about the crime itself, I think.




Anyone can (and eventually will) screw up.
Thomas Brewster reports:
Investigators at the FBI and the DHS have failed to conceal minor victims’ identities in court documents where they disclosed a combination of teenagers’ initials and their Facebook identifying numbers—a unique code linked to Facebook accounts. Forbes discovered it was possible to quickly find their real names and other personal information by simply entering the ID number after “facebook.com/”, which led to the minors’ accounts.
In two cases unsealed this month, multiple identities were easily retrievable by simply copying and pasting the Facebook IDs from the court filings into the Web address.
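The redaction failure is mechanical: initials plus a platform’s persistent numeric ID re-identify trivially, because the ID maps straight to a profile URL. A hedged sketch (the IDs below are made up, and the URL pattern is simply the one Forbes describes):

```python
# "Redacted" records that keep a persistent numeric platform ID can be
# turned back into profile links with one string operation.
records = [
    {"initials": "J.D.", "facebook_id": "100000000000001"},  # invented ID
    {"initials": "A.B.", "facebook_id": "100000000000002"},  # invented ID
]

def profile_url(fb_id):
    # Per the article, facebook.com/<numeric id> resolves to the account.
    return f"https://facebook.com/{fb_id}"

for r in records:
    print(r["initials"], "->", profile_url(r["facebook_id"]))
```

The general lesson for anyone redacting documents: any stable identifier that joins back to a public lookup service is as identifying as the name itself.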
Read more on Forbes.




Auditors are a skeptical bunch.
Cyber Crime Widely Underreported Says ISACA 2019 Annual Report on Cyber Security Trends
The headliner of the most recent part of the cyber security trends report is the underreporting of cyber crime around the globe, which appears to have become normalized. About half of the respondents indicated that they feel that most enterprises do not report all of the cyber crime that they experience, including incidents that they are legally obligated to disclose.
This is taking place in a cyber security landscape in which just under half of the respondents said that cyber attacks had increased in the previous year, and nearly 80% expect to have to contend with a cyber attack on their organization next year. And only a third of the cyber security leaders reported “high” confidence in the ability of their teams to detect and respond to such an attack.




Phishing for people who should know better.
Phishing Campaign Impersonates DHS Alerts
The Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert on a phishing campaign using attachments that impersonate the Department of Homeland Security (DHS).
In an effort to make their attack successful, the phishers spoofed the sender email address to appear as a National Cyber Awareness System (NCAS) alert.
Using social engineering, the attackers then attempt to trick users into clicking the attachments, which were designed to appear as legitimate DHS notifications.
The attachments, however, are malicious, and the purpose of the attack was to lure the targeted recipients into downloading malware onto their systems.
… “CISA will never send NCAS notifications that contain email attachments. Immediately report any suspicious emails to your information technology helpdesk, security office, or email provider,” the alert concludes.




Maybe GDPR hasn’t solved all the problems yet.
Behavioural advertising is out of control, warns UK watchdog
The online behavioural advertising industry is illegally profiling internet users.
That’s the damning assessment of the U.K.’s data protection regulator in an update report published today, in which it sets out major concerns about the programmatic advertising process known as real-time bidding (RTB), which makes up a large chunk of online advertising.
In what sounds like a knock-out blow for highly invasive data-driven ads, the Information Commissioner’s Office (ICO) concludes that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of U.K. and pan-EU privacy laws.
“The adtech industry appears immature in its understanding of data protection requirements,” it writes.




I’m shocked, shocked I tell you!
Americans lack trust in social networks’ judgment to remove offensive posts, study finds
In a survey published Wednesday by Pew Research Center, 66 percent of Americans say social networks have a responsibility to delete offensive posts and videos. But determining the threshold for removal has been a tremendous challenge for such companies as Facebook, YouTube and Twitter — exposing them to criticism that they’ve been too slow and reactive to the relentless stream of abusive and objectionable content that populate their platforms.
When it comes to removing offensive material, 45 percent of those surveyed said they did not have much confidence in the companies to decide what content to take down.




More art than science and with strong regional bias?
Medicine contends with how to use artificial intelligence
Artificial intelligence (AI) is poised to upend the practice of medicine, boosting the efficiency and accuracy of diagnosis in specialties that rely on images, like radiology and pathology. But as the technology gallops ahead, experts are grappling with its potential downsides. One major concern: Most AI software is designed and tested in one hospital, and it risks faltering when transferred to another. Last month, in the Journal of the American College of Radiology, U.S. government scientists, regulators, and doctors published a road map describing how to convert research-based AI into software for medical imaging on patients. Among other things, the authors urged more collaboration across disciplines in building and testing AI algorithms and intensive validation of them before they reach patients. Right now, most AI in medicine is used in research, but regulators have already approved some algorithms for radiologists. Many studies are testing algorithms to read x-rays, detect brain bleeds, pinpoint tumors, and more.




Another collection of what ifs…
Death by algorithm: the age of killer robots is closer than you think
Right now, US machine learning and AI is the best in the world, [Debatable at best. Bob] which means that the US military is loath to promise that it will not exploit that advantage on the battlefield. “The US military thinks it’s going to maintain a technical advantage over its opponents,” Walsh told me.
That line of reasoning, experts warn, opens us up to some of the scariest possible scenarios for AI. Many researchers believe that advanced artificial intelligence systems have enormous potential for catastrophic failures — going wrong in ways that humanity cannot correct once we’ve developed them, and (if we screw up badly enough) potentially wiping us out.
In order to avoid that, AI development needs to be open, collaborative, and careful. Researchers should not be conducting critical AI research in secret, where no one can point out their errors. If AI research is collaborative and shared, we are more likely to notice and correct serious problems with advanced AI designs.




Probably, maybe.
The evolution of cognitive architecture will deliver human-like AI
But you can't just slap features together and hope to get an AGI
There's no one right way to build a robot, just as there's no singular means of imparting it with intelligence. Last month, Engadget spoke with Carnegie Mellon University associate research professor and the director of the Resilient Intelligent Systems Lab, Nathan Michael, whose work involves stacking and combining a robot's various piecemeal capabilities together as it learns them into an amalgamated artificial general intelligence (AGI). Think, a Roomba that learns how to vacuum, then learns how to mop, then learns how to dust and do dishes -- pretty soon, you've got Rosie from The Jetsons.




Lawyers are not always concerned about getting things done? I’m shocked!



Thursday, June 20, 2019


Why backups are important.
Florida City Pays $600,000 Ransom to Save Computer Records
The Riviera Beach City Council voted unanimously this week to pay the hackers’ demands, believing the Palm Beach suburb had no choice if it wanted to retrieve its records, which the hackers encrypted. The council already voted to spend almost $1 million on new computers and hardware after hackers captured the city’s system three weeks ago.
The hackers apparently got into the city’s system when an employee clicked on an email link that allowed them to upload malware. Along with the encrypted records, the city had numerous problems including a disabled email system, employees and vendors being paid by check rather than direct deposit and 911 dispatchers being unable to enter calls into the computer.
She conceded there are no guarantees that once the hackers received the money they will release the records. The payment is being covered by insurance.




You might think that government agencies would understand the laws and regulations they operate under. I stopped thinking that years ago.
Government error delays online pornography age-check scheme
An age-check scheme designed to stop under-18s viewing pornographic websites has been delayed a second time.
The changes - which mean UK internet users may have to prove their age - were due to start on 15 July after already being delayed from April 2018.
The culture secretary confirmed the postponement, saying the government had failed to notify European regulators about the plan.
Completing the notification process could take up to six months. [So this is not a trivial process. Bob]




Some interesting statements.
Law Libraries Embracing AI
Craigle, Valeri, Law Libraries Embracing AI (2019). Law Librarianship in the Age of AI, (Ellyssa Valenti, Ed.), 2019, Forthcoming; University of Utah College of Law Research Paper. Available at SSRN: https://ssrn.com/abstract=3381798 or http://dx.doi.org/10.2139/ssrn.3381798
The utilization of AI provides insights for legal clients, future-proofs careers for attorneys and law librarians, and elevates the status of the information suite. AI training in law schools makes students more practice-ready in an increasingly tech-centric legal environment; Access to Justice initiatives are embracing AI’s capabilities to provide guidance to educational resources and legal services for the under-represented. AI’s presence in the legal community is becoming so common that it can no longer be seen as an anomaly, or even cutting edge. Some even argue that its absence in law firms will eventually be akin to malpractice. This chapter explores some practical uses of AI in legal education and law firms, with a focus on professionals who have gone beyond the role of AI consumers to that of AI developers, data curators and system designers…”




A field I should encourage my students to consider? It seems to attract a lot of money…
Stephen Schwarzman gives $188 million to Oxford to research AI ethics
Stephen Schwarzman, the billionaire founder of investment firm Blackstone, has given the University of Oxford its largest single donation in hundreds of years to help fund research into the ethics of artificial intelligence.
The £150 million ($188 million) contribution will fund an academic institute bearing the investor's name, the British university announced Wednesday.
The Stephen A. Schwarzman Centre for the Humanities will bring together all of Oxford's humanities programs under one roof — including English, history, linguistics, philosophy, and theology and religion. It will also house a new Institute for Ethics in AI, which will focus on studying the ethical implications of artificial intelligence and other new technology. The institute is expected to open by 2024.
He made a $350 million gift to the Massachusetts Institute of Technology last year to set up the MIT Schwarzman College of Computing, which aims to "address the opportunities and challenges presented by the rise of artificial intelligence" including its ethical and policy implications.