Saturday, June 22, 2019


Update. Follow-up reports almost always show the hack was larger than originally reported. Why would a camera provider have all this information?
Report: CBP contractor hack was vast, revealed plans for border surveillance
A cyberattack on a subcontractor for U.S. Customs and Border Protection (CBP) exposed surveillance plans and much more than was previously disclosed, according to a new report.
Earlier this month, U.S. Customs and Border Protection said photos of travelers and license plates had been compromised during a cyberattack, adding that fewer than 100,000 people were affected.
However, the Washington Post reported on Friday that the cyberattack also compromised documents including “detailed schematics, confidential agreements, equipment lists, budget spreadsheets, internal photos and hardware blueprints for security systems.”
The information taken amounted to “hundreds of gigabytes,” the newspaper reported.




No standard definition of ‘fairness’?
How AI Can Help with the Detection of Financial Crimes
According to Dickie, AI can have a significant impact in data-rich domains where prediction and pattern recognition play an important role. For instance, in areas such as risk assessment and fraud detection in the banking sector, AI can identify aberrations by analyzing past behaviors. But, of course, there are also concerns around issues such as fairness, interpretability, security and privacy.
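For a sense of what that looks like in practice, here is a minimal sketch of anomaly-based fraud flagging, assuming scikit-learn and invented transaction features (amount and hour of day). A production system would use far richer behavioral data and human review, but the pattern-recognition idea is the same.

```python
# Minimal sketch: flagging aberrant transactions with an unsupervised model.
# Feature names, values, and thresholds are hypothetical illustrations only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy "past behavior": purchase amount (USD) and hour of day for routine card use.
normal = np.column_stack([
    rng.normal(80, 25, 1_000),   # typical purchase amounts
    rng.normal(14, 3, 1_000),    # typical transaction hours
])

# Learn what "normal" looks like from historical behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New activity: one routine purchase and one 3 a.m. high-value outlier.
new = np.array([[95.0, 13.0], [4200.0, 3.0]])
print(model.predict(new))  # 1 = looks normal, -1 = flag for fraud review
```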




It gets complicated fast…
An Analysis of the Consequences of the General Data Protection Regulation on Social Network Research
This article examines the principles outlined in the General Data Protection Regulation (GDPR) in the context of social network data. We provide both a practical guide to GDPR-compliant social network data processing, covering aspects such as data collection, consent, anonymization and data analysis, and a broader discussion of the problems emerging when the general principles on which the regulation is based are instantiated to this research area.




Why did you do that, Mr. Terminator?
TED: Teaching AI to Explain its Decisions
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem.




Businesses exist to take risks. Lawyers exist to avoid risks?



Friday, June 21, 2019


Computer Security is about making sure this cannot happen.
Julie Anderson reports:
An IT error resulted in the deletion of patient records Tuesday from the Creighton University Campus Pharmacy at 2412 Cuming St.
The lost data includes prescription and refill history and insurance information for all customers. A count of customers wasn’t immediately available, but the pharmacy filled 50,000 prescriptions in 2017.
The incident did not involve a breach, Creighton officials said. No patient records were stolen; the data was deleted.
However, the loss means that the pharmacy’s database must be rebuilt. All patient data must be re-entered and new prescriptions obtained from physicians.
Read more on Live Well Nebraska.
So… was this/is this pharmacy a HIPAA-covered entity? It would seem that it almost certainly is. So where was its risk assessment? And did they really have no backup?
This may not be a reportable breach under HIPAA and HITECH, but HHS OCR should be auditing them and looking into this if they are covered by HIPAA.


(Related) or this.
Frédéric Tomesco reports:
More than 2.9 million Desjardins Group members have had their personal information compromised in a data breach targeting Canada’s biggest credit union.
The incident stems from “unauthorized and illegal use of internal data” by an employee who has since been fired, Desjardins said Thursday in a statement. Computer systems were not breached, the cooperative said. [But the data was… Bob]
Names, dates of birth, social insurance numbers, addresses and phone numbers of about 2.7 million individual members were released to people outside the organization, Desjardins said. Passwords, security questions and personal identification numbers weren’t compromised, Desjardins stressed. About 173,000 business customers were also affected.
Read more on Montreal Gazette.
The statement from Desjardins Group does not offer any explanation of the former employee’s “ill-intentioned” conduct. Was the employee selling the data to criminals? Were they selling it to spammers? Were they giving it to a competitor? It would be easier to evaluate the risk to individuals if more were known about the crime itself, I think.




Anyone can (and eventually will) screw up.
Thomas Brewster reports:
Investigators at the FBI and the DHS have failed to conceal minor victims’ identities in court documents where they disclosed a combination of teenagers’ initials and their Facebook identifying numbers—a unique code linked to Facebook accounts. Forbes discovered it was possible to quickly find their real names and other personal information by simply entering the ID number after “facebook.com/”, which led to the minors’ accounts.
In two cases unsealed this month, multiple identities were easily retrievable by simply copying and pasting the Facebook IDs from the court filings into the Web address.
Read more on Forbes.




Auditors are a skeptical bunch.
Cyber Crime Widely Underreported Says ISACA 2019 Annual Report on Cyber Security Trends
The headline finding of the most recent installment of the cyber security trends report is the underreporting of cyber crime around the globe, which appears to have become normalized. About half of the respondents indicated that they feel that most enterprises do not report all of the cyber crime that they experience, including incidents that they are legally obligated to disclose.
This is taking place in a cyber security landscape in which just under half of the respondents said that cyber attacks had increased in the previous year, and nearly 80% expect to have to contend with a cyber attack on their organization next year. And only a third of the cyber security leaders reported “high” confidence in the ability of their teams to detect and respond to such an attack.




Phishing for people who should know better.
Phishing Campaign Impersonates DHS Alerts
The Cybersecurity and Infrastructure Security Agency (CISA) has issued an alert on a phishing campaign using attachments that impersonate the Department of Homeland Security (DHS).
In an effort to make their attack successful, the phishers spoofed the sender email address to appear as a National Cyber Awareness System (NCAS) alert.
Using social engineering, the attackers then attempt to trick users into clicking the attachments, which were designed to appear as legitimate DHS notifications.
The attachments, however, are malicious, and the purpose of the attack was to lure the targeted recipients into downloading malware onto their systems.
… “CISA will never send NCAS notifications that contain email attachments. Immediately report any suspicious emails to your information technology helpdesk, security office, or email provider,” the alert concludes.




Maybe GDPR hasn’t solved all the problems yet.
Behavioural advertising is out of control, warns UK watchdog
The online behavioural advertising industry is illegally profiling internet users.
That’s the damning assessment of the U.K.’s data protection regulator in an update report published today, in which it sets out major concerns about the programmatic advertising process known as real-time bidding (RTB), which makes up a large chunk of online advertising.
In what sounds like a knock-out blow for highly invasive data-driven ads, the Information Commissioner’s Office (ICO) concludes that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of U.K. and pan-EU privacy laws.
“The adtech industry appears immature in its understanding of data protection requirements,” it writes.




I’m shocked, shocked I tell you!
Americans lack trust in social networks’ judgment to remove offensive posts, study finds
In a survey published Wednesday by Pew Research Center, 66 percent of Americans say social networks have a responsibility to delete offensive posts and videos. But determining the threshold for removal has been a tremendous challenge for such companies as Facebook, YouTube and Twitter — exposing them to criticism that they’ve been too slow and reactive to the relentless stream of abusive and objectionable content that populates their platforms.
When it comes to removing offensive material, 45 percent of those surveyed said they did not have much confidence in the companies to decide what content to take down.




More art than science and with strong regional bias?
Medicine contends with how to use artificial intelligence
Artificial intelligence (AI) is poised to upend the practice of medicine, boosting the efficiency and accuracy of diagnosis in specialties that rely on images, like radiology and pathology. But as the technology gallops ahead, experts are grappling with its potential downsides. One major concern: Most AI software is designed and tested in one hospital, and it risks faltering when transferred to another. Last month, in the Journal of the American College of Radiology, U.S. government scientists, regulators, and doctors published a road map describing how to convert research-based AI into software for medical imaging on patients. Among other things, the authors urged more collaboration across disciplines in building and testing AI algorithms and intensive validation of them before they reach patients. Right now, most AI in medicine is used in research, but regulators have already approved some algorithms for radiologists. Many studies are testing algorithms to read x-rays, detect brain bleeds, pinpoint tumors, and more.




Another collection of what ifs…
Death by algorithm: the age of killer robots is closer than you think
Right now, US machine learning and AI is the best in the world, [Debatable at best. Bob] which means that the US military is loath to promise that it will not exploit that advantage on the battlefield. “The US military thinks it’s going to maintain a technical advantage over its opponents,” Walsh told me.
That line of reasoning, experts warn, opens us up to some of the scariest possible scenarios for AI. Many researchers believe that advanced artificial intelligence systems have enormous potential for catastrophic failures — going wrong in ways that humanity cannot correct once we’ve developed them, and (if we screw up badly enough) potentially wiping us out.
In order to avoid that, AI development needs to be open, collaborative, and careful. Researchers should not be conducting critical AI research in secret, where no one can point out their errors. If AI research is collaborative and shared, we are more likely to notice and correct serious problems with advanced AI designs.




Probably, maybe.
The evolution of cognitive architecture will deliver human-like AI
But you can't just slap features together and hope to get an AGI
There's no one right way to build a robot, just as there's no singular means of imparting it with intelligence. Last month, Engadget spoke with Carnegie Mellon University associate research professor and director of the Resilient Intelligent Systems Lab, Nathan Michael, whose work involves stacking and combining a robot's various piecemeal capabilities, as it learns them, into an amalgamated artificial general intelligence (AGI). Think: a Roomba that learns how to vacuum, then learns how to mop, then learns how to dust and do dishes -- pretty soon, you've got Rosie from The Jetsons.




Lawyers are not always concerned about getting things done? I’m shocked!



Thursday, June 20, 2019


Why backups are important.
Florida City Pays $600,000 Ransom to Save Computer Records
The Riviera Beach City Council voted unanimously this week to pay the hackers’ demands, believing the Palm Beach suburb had no choice if it wanted to retrieve its records, which the hackers encrypted. The council already voted to spend almost $1 million on new computers and hardware after hackers captured the city’s system three weeks ago.
The hackers apparently got into the city’s system when an employee clicked on an email link that allowed them to upload malware. Along with the encrypted records, the city had numerous problems, including a disabled email system, employees and vendors being paid by check rather than direct deposit, and 911 dispatchers being unable to enter calls into the computer.
She conceded there are no guarantees that the hackers will release the records once they receive the money. The payment is being covered by insurance.




You might think that government agencies would understand the laws and regulations they operate under. I stopped thinking that years ago.
Government error delays online pornography age-check scheme
An age-check scheme designed to stop under-18s viewing pornographic websites has been delayed a second time.
The changes - which mean UK internet users may have to prove their age - were due to start on 15 July after already being delayed from April 2018.
The culture secretary confirmed the postponement saying the government had failed to tell European regulators about the plan.
Completing the notification process could take up to six months. [So this is not a trivial process. Bob]




Some interesting statements.
Law Libraries Embracing AI
Craigle, Valeri, Law Libraries Embracing AI (2019). Law Librarianship in the Age of AI, (Ellyssa Valenti, Ed.), 2019, Forthcoming; University of Utah College of Law Research Paper. Available at SSRN: https://ssrn.com/abstract=3381798 or http://dx.doi.org/10.2139/ssrn.3381798
“The utilization of AI provides insights for legal clients, future-proofs careers for attorneys and law librarians, and elevates the status of the information suite. AI training in law schools makes students more practice-ready in an increasingly tech-centric legal environment; Access to Justice initiatives are embracing AI’s capabilities to provide guidance to educational resources and legal services for the under-represented. AI’s presence in the legal community is becoming so common that it can no longer be seen as an anomaly, or even cutting edge. Some even argue that its absence in law firms will eventually be akin to malpractice. This chapter explores some practical uses of AI in legal education and law firms, with a focus on professionals who have gone beyond the role of AI consumers to that of AI developers, data curators and system designers…”




A field I should encourage my students to consider? It seems to attract a lot of money…
Stephen Schwarzman gives $188 million to Oxford to research AI ethics
Stephen Schwarzman, the billionaire founder of investment firm Blackstone, has given the University of Oxford its largest single donation in hundreds of years to help fund research into the ethics of artificial intelligence.
The £150 million ($188 million) contribution will fund an academic institute bearing the investor's name, the British university announced Wednesday.
The Stephen A. Schwarzman Centre for the Humanities will bring together all of Oxford's humanities programs under one roof — including English, history, linguistics, philosophy, and theology and religion. It will also house a new Institute for Ethics in AI, which will focus on studying the ethical implications of artificial intelligence and other new technology. The institute is expected to open by 2024.
He made a $350 million gift to the Massachusetts Institute of Technology last year to set up the MIT Schwarzman College of Computing, which aims to "address the opportunities and challenges presented by the rise of artificial intelligence" including its ethical and policy implications.



Wednesday, June 19, 2019


Phishing, because it works!
645,000 Clients Affected in Oregon Department of Human Services Data Breach
Oregon Department of Human Services officials say they are notifying about 645,000 clients whose personal information is at risk from a January data breach.
The Statesman-Journal reports state officials announced the notifications Tuesday and will start mailing them Wednesday.
The breach happened during an email "phishing" attempt that targeted the department Jan. 8. Nine employees opened the email and clicked on a link that gave the perpetrator access to their email accounts.




What determines how (and how much) the police invest to solve a crime? If genetic matching is cheap, why not use it?
Should the police be able to investigate your genetic family tree for any crime, no matter how minor?
The New York Times – Want to See My Genes? Get a Warrant – Should the police be able to investigate your genetic family tree for any crime, no matter how minor? “…Genetic genealogy requires lots of DNA samples and an easy way to compare them. Americans have created millions of genetic profiles already. A 2018 study published in Science predicted that 90 percent of Americans of European descent will be identifiable from their DNA within a year or two, even if they have not used a consumer DNA service. As for easy access, GEDmatch’s website provides exactly this opportunity. Consumers can take profiles generated from other commercial genetic testing services, upload them free and compare them to other profiles. So can the police. We should be glad whenever a cold case involving serious crimes like rape or murder can be solved. But the use of genetic genealogy in the Centerville assault case raises with new urgency fundamental questions about this technique…”




Imagine appeals based on the programming of the “judge.”
Developing Artificially Intelligent Justice
Re, Richard M. and Solow-Niederman, Alicia, Developing Artificially Intelligent Justice (May 19, 2019). Stanford Technology Law Review, Forthcoming; UCLA School of Law, Public Law Research Paper No. 19-16. Available at SSRN: https://ssrn.com/abstract=3390854
“Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, but auspicious reform proposals would borrow several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.”




Wise, and therefore ignored?
EU lawmakers need to look beyond the ‘top layer’ when regulating the internet
Brussels policy makers could be forgiven for wanting to move quickly to regulate ‘the internet.’ Assailed by an avalanche of public opinion and a ‘techlash’ against many of the tech giants, politicians and legislators have quickly sought to target those that loom large. Unsurprisingly, this has meant a disproportionate focus on the well-known consumer facing technology platforms.
Outwardly, this may seem like a sensible move. But problems occur when policy makers see these large tech platforms as ‘the internet,’ when in fact they are nothing more than the ‘top layer’ — the proverbial tip of the iceberg. Policy ideas and initiatives that underestimate the complexity of the internet ecosystem, with all its different parts, players, and business models, are dangerous and can ultimately lead to unintended consequences.




Interesting, but nothing much new.
The fourth Industrial revolution emerges from AI and the Internet of Things
Big data, analytics, and machine learning are starting to feel like anonymous business words, but they're not just overused abstract concepts—those buzzwords represent huge changes in much of the technology we deal with in our daily lives. Some of those changes have been for the better, making our interaction with machines and information more natural and more powerful. Others have helped companies tap into consumers' relationships, behaviors, locations and innermost thoughts in powerful and often disturbing ways. And the technologies have left a mark on everything from our highways to our homes.




Perspective. Another step closer to the death of cash?
Facebook Is Launching Its Own Cryptocurrency
Libra would allow you to send money to “almost anyone with a smartphone” quickly and at “low to no cost”. Over time, Facebook hopes you’ll be able to use Libra to pay for other products and services, just as you would with Google Pay and Apple Pay.
Libra isn’t all about Facebook. Instead, the Libra Association will oversee the digital currency independent of Facebook. Members of the Libra Association include Visa, Mastercard, PayPal, eBay, Uber, Spotify, and a host of venture capital firms.
Just like Bitcoin and other cryptocurrencies, Libra will be built on the foundation of a blockchain. However, Facebook is hoping to avoid fluctuations in value by pegging Libra to real-world currencies such as the US dollar and the Euro.
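For readers who want the mechanics, here is a toy sketch of how a peg to real-world currencies dampens fluctuation: value one token as a weighted currency basket. The weights and exchange rates below are invented for illustration and are not Libra's actual composition.

```python
# Toy sketch of a currency-basket peg. Weights and FX rates are hypothetical.
basket_weights = {"USD": 0.50, "EUR": 0.30, "GBP": 0.20}   # invented basket weights
usd_per_unit = {"USD": 1.00, "EUR": 1.12, "GBP": 1.27}     # invented exchange rates

# One token is worth the weighted sum of its basket, expressed in US dollars.
token_value_usd = sum(w * usd_per_unit[ccy] for ccy, w in basket_weights.items())
print(f"1 token = ${token_value_usd:.4f}")  # stays stable as long as the basket does
```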




Tools.
Python Could Rule the Machine Learning/A.I. World
According to a developer survey by JetBrains (which also introduced Kotlin, the up-and-coming language for Android development), some 49 percent say they use Python for data analytics, ahead of web development (46 percent), machine learning (42 percent), and system administration (37 percent).
This data just reinforces the general idea that Python is swallowing the data-analytics space whole. Although highly specialized languages such as R have their place among academics and more research-centric data analysts, it’s clear that Python’s relative ease of use (not to mention its ubiquity) has made it many friends among those who need to crunch data for some aspect of their jobs.



Tuesday, June 18, 2019


Spoiler alert: Not many.
U.S. Cyber Command, Russia and Critical Infrastructure: What Norms and Laws Apply?
Damaging critical infrastructure is clearly out of bounds as responsible peacetime state behavior and would likely violate international law. But do these types of intrusions – seemingly intended to prepare for future operations or deter them, or both, without causing any actual harm – also run counter to applicable non-binding norms or violate international law during peacetime?


(Related)
Russia Says Victim of US Cyberattacks 'for Years'




Catching up with new technology, slowly.
Mike Maharrey writes:
SANTA FE, N.M. (June 14, 2019) – Today, a New Mexico law goes into effect that limits the warrantless use of stingray devices to track people’s location and sweep up electronic communications, and more broadly protects the privacy of electronic data. The new law will also hinder the federal surveillance state.
Sen. Peter Wirth (D) filed Senate Bill 199 (SB199 ) on Jan. 8. Titled the “Electronic Communications Privacy Act,” the new law will help block the use of cell site simulators, known as “stingrays.” These devices essentially spoof cell phone towers, tricking any device within range into connecting to the stingray instead of the tower, allowing law enforcement to sweep up communications content, as well as locate and track the person in possession of a specific phone or other electronic device.
The law requires police to obtain a warrant or wiretap order before deploying a stingray device, unless they have the explicit permission of the owner or authorized possessor of the device, or if the device is lost or stolen. SB199 includes an exception to the warrant requirement for emergency situations. Even then, police must apply for a warrant within 3 days and destroy any information obtained if the court denies the application.
Read more on Tenth Amendment Center.




Failure to manage?
At least 50,000 license plates leaked in hack of border contractor not authorized to retain them
At least 50,000 American license plate numbers have been made available on the dark web after a company hired by Customs and Border Protection was at the center of a major data breach, according to CNN analysis of the hacked data. What's more, the company was never authorized to keep the information, the agency told CNN.
"CBP does not authorize contractors to hold license plate data on non-CBP systems," an agency spokesperson told CNN.
The admission raises questions about who's responsible when the US government hires contractors to surveil citizens, but then those contractors mishandle the data.
"This data does have to be deleted," the CBP spokesperson said, though the agency didn't clarify the specifics of the policy that would apply to Perceptics.
Last week, CBP said in a statement that "none of the image data has been identified on the Dark Web or internet," though CNN was still able to find it.




An unintended consequence?
GDPR Has Been a Boon for Google and Facebook
Europe’s privacy laws have pushed advertisers to give business to the tech giants they trust
The General Data Protection Regulation, or GDPR, which went into effect across the European Union last year, has pushed marketers to spend more of their ad dollars with the biggest players, in particular Alphabet Inc.’s Google and Facebook Inc., ad-tech companies and media buyers say.




Trivial for an AI.
Odia Kagan of FoxRothschild writes:
“Whenever we make a call, go to work, search the web, pay with our credit card, we generate data. While de-identification might have worked in the past, it doesn’t really scale to the type of large-scale datasets being collected today.”
It turns out that “four random points (i.e. time and location where a person has been) are enough to uniquely identify someone 95 percent of the time in a dataset with 1.5 million individuals…”
“All these results lead to the conclusion that an efficient enough, yet general, anonymization method is extremely unlikely to exist for high-dimensional data,” say Y.A. de Montjoye and A. Gadotti.
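The claim is easy to reproduce in miniature. The sketch below builds a toy location dataset and counts how often a handful of known (place, hour) points pins a person down uniquely; the population size, grid, and trace lengths are invented, and this is only a simplified stand-in for the de Montjoye methodology.

```python
# Toy re-identification experiment: how many people are uniquely identified
# by k random (place, hour) observations an attacker happens to know?
import random

random.seed(1)
N_PEOPLE, N_POINTS, PLACES, HOURS = 2_000, 50, 200, 24

# Each person's trace: a set of (place, hour) observations.
traces = [
    {(random.randrange(PLACES), random.randrange(HOURS)) for _ in range(N_POINTS)}
    for _ in range(N_PEOPLE)
]

def unique_with_k_points(k: int, trials: int = 300) -> float:
    hits = 0
    for _ in range(trials):
        person = random.randrange(N_PEOPLE)
        known = set(random.sample(sorted(traces[person]), k))  # attacker's side knowledge
        matches = sum(1 for t in traces if known <= t)          # traces consistent with it
        hits += (matches == 1)
    return hits / trials

for k in (1, 2, 4):
    print(k, "known points ->", f"{unique_with_k_points(k):.0%}", "uniquely identified")
```

Even in this crude toy, uniqueness climbs quickly as the number of known points grows, which is the intuition behind the "four random points" result.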




How would we ‘vet’ the data that trains an AI?
The unforeseen trouble AI is now causing
AI has come a long way in recent years — but as many who work with this technology can attest, it is still prone to surprising errors that wouldn’t be made by a human observer. While these errors can sometimes be the result of the required learning curve for artificial intelligence, it is becoming apparent that a far more serious problem is posing an increasing risk: adversarial data.
For the uninitiated, adversarial data describes a situation in which human users intentionally supply an algorithm with corrupted information. The corrupted data throws off the machine learning process, tricking the algorithm into reaching fake conclusions or incorrect predictions.
As a biomedical engineer, I view adversarial data as a significant cause for concern. UC Berkeley professor Dawn Song notably tricked a self-driving car into thinking that a stop sign says the speed limit is 45 miles per hour.
Interestingly, adversarial data output can occur even without malicious intent. This is largely because of the way algorithms can “see” things in the data that we humans are unable to discern. Because of that “visibility,” a recent case study from MIT describes adversarial examples as “features” rather than bugs.
As Moazzam Khan noted at Security Intelligence, there are two main types of attacks that rely on adversarial data: poisoning attacks, in which “the attacker provides input samples that shift the decision boundary in his or her favor,” and evasion attacks, in which “an attacker causes the model to misclassify a sample.”
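A minimal sketch of the poisoning variety, using synthetic data and scikit-learn's LogisticRegression as a stand-in for whatever model a real system would run: flipping a few training labels near the boundary visibly shifts where the model draws the line.

```python
# Label-flipping poisoning sketch: corrupting a small slice of the training
# data shifts a simple classifier's decision boundary. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two clean classes along a single feature.
X = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)]).reshape(-1, 1)
y = np.array([0] * 300 + [1] * 300)

clean = LogisticRegression().fit(X, y)

# Attacker flips the labels of the 30 class-1 points closest to the boundary.
y_poisoned = y.copy()
nearest_first = np.argsort(np.abs(X[:, 0]))
flipped = [i for i in nearest_first if y[i] == 1][:30]
y_poisoned[flipped] = 0

poisoned = LogisticRegression().fit(X, y_poisoned)

# The boundary (where P(class 1) = 0.5) moves in the attacker's favor.
boundary = lambda m: -m.intercept_[0] / m.coef_[0][0]
print(f"clean boundary:    x = {boundary(clean):.2f}")
print(f"poisoned boundary: x = {boundary(poisoned):.2f}")
```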




Should we feed in the speech of political candidates?
MACHINE LEARNING SAYS ‘SOUND WORDS’ PREDICT PSYCHOSIS
The researchers also developed a new machine-learning method to more precisely quantify the semantic richness of people’s conversational language, a known indicator for psychosis.
Their results show that automated analysis of the two language variables—more frequent use of words associated with sound and speaking with low semantic density, or vagueness—can predict whether an at-risk person will later develop psychosis with 93 percent accuracy.
Even trained clinicians had not noticed how people at risk for psychosis use more words associated with sound than the average, although abnormal auditory perception is a pre-clinical symptom.
“Trying to hear these subtleties in conversations with people is like trying to see microscopic germs with your eyes,” says Neguine Rezaii, first author of the paper in npj Schizophrenia. “The automated technique we’ve developed is a really sensitive tool to detect these hidden patterns. It’s like a microscope for warning signs of psychosis.”
Original Study DOI: 10.1038/s41537-019-0077-9
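Out of curiosity, here is a loose, hypothetical sketch of the two language features described above: the rate of sound-related words and a crude semantic-density proxy. This is not the study's actual method; the word list and the density measure are simplified stand-ins.

```python
# Rough stand-ins for the two features: share of sound-related words, and a
# crude "semantic density" proxy (distinct content words per token).
import re

SOUND_WORDS = {"voice", "voices", "sound", "sounds", "hear", "heard",
               "whisper", "noise", "loud", "quiet"}   # hypothetical word list
STOPWORDS = {"the", "a", "an", "and", "or", "but", "i", "it", "is", "was",
             "to", "of", "in", "that", "you", "they"}

def language_features(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return {"sound_word_rate": 0.0, "semantic_density": 0.0}
    content = [t for t in tokens if t not in STOPWORDS]
    return {
        # share of tokens drawn from the sound-related word list
        "sound_word_rate": sum(t in SOUND_WORDS for t in tokens) / len(tokens),
        # crude vagueness proxy: distinct content words per token
        "semantic_density": len(set(content)) / len(tokens),
    }

print(language_features("I keep hearing a quiet voice, a sound in the noise."))
```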