Saturday, July 06, 2019


Ransomware can be defeated!
Warwick Ashford reports:
St John Ambulance has reported that it was hit by a ransomware attack this week, but was able to isolate the attack and resolve it within half an hour.
Fortunately, the ransomware did not affect operational systems, but blocked access to the charity’s booking system for training courses and encrypted customer data.
Read more on ComputerWeekly.
[From the article:
The charity has been praised for its swift, effective and transparent response to the ransomware attack, which is currently the most common cyber criminal activity affecting individuals and businesses in the UK, according to the police.
… “The best way to prevent ransomware attacks is for companies to ensure they are not vulnerable by following best practices on cyber security basics to ensure good cyber hygiene,” said Jones.
“Having good, functional data backups, treating your data as an asset, having appropriate policies around your data, and having incident response available to you are all simple ways of mitigating the harm from ransomware, which is the most prevalent form of attack we see.”




This could get interesting. (But I can’t find it on LiveMint.)
Prathma Sharma reports:
The Supreme Court on Friday issued notice to the Centre and the Unique Identification Authority of India (UIDAI) in a petition challenging the validity of the 2019 Aadhaar Ordinance.
The petition challenged the Aadhaar and Other Laws (Amendment) Ordinance, 2019 and the Aadhaar (Pricing of Aadhaar Authentication Services) Regulations, 2019, alleging that these violate fundamental rights guaranteed under the Constitution.
The notice was issued by a division bench of Justices SA Bobde and BR Gavai.
The petition said “Aadhaar database lacks integrity as it has no value other than, at most, the underlying documents on the basis of which the Aadhaar numbers are issued… none of the data uploaded at the time of enrollment is verified by anyone, much less a government official.”
Read more on LiveMint




Still not there. Would we understand the explanation if we got it?
I, BLACK BOX: EXPLAINABLE ARTIFICIAL INTELLIGENCE AND THE LIMITS OF HUMAN DELIBERATIVE PROCESSES
Much has been made about the importance of understanding the inner workings of machines when it comes to the ethics of using artificial intelligence (AI) on the battlefield. Delegates at the Group of Governmental Experts meetings on lethal autonomous weapons continue to raise the issue. Concerns expressed by legal and scientific scholars abound. One commentator sums it up: “for human decision makers to be able to retain agency over the morally relevant decisions made with AI they would need a clear insight into the AI black box, to understand the data, its provenance and the logic of its algorithms.”
The underlying premise of such arguments is that if humans are making decisions on the ground, then other humans farther up the food chain in battlefield decision-making — commanders, political leadership, analysts, and so forth — will be able to find out why they made those decisions and respond accordingly. If algorithms are making these decisions, the thinking goes, we’ll have no such insight, and we’ll lose meaningful human control. But psychology research shows that we humans are not nearly as explainable as we give ourselves credit for, so we might be overstating the meaningfulness of the human control we thought we had in the first place.
Enter “explainable artificial intelligence,” sometimes called XAI. With algorithms that can explain their decision-making processes — in a way that humans often can’t — technology could increase, rather than decrease, the likelihood that those decision-makers who are not on the ground will get an accurate answer as to why a given decision was made.
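For readers who want a concrete feel for what “explainable” can mean in practice, here is a minimal sketch of one common XAI technique, permutation feature importance, using scikit-learn. Everything in it (the data, the model, the feature names) is an illustrative assumption, not anything from the article.

```python
# A minimal sketch of one common XAI technique: permutation feature
# importance. Shuffle each input feature in turn and measure how much
# the model's accuracy drops; a large drop means the model leans
# heavily on that input. Data, model, and names are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
feature_names = ["sensor_a", "sensor_b", "sensor_c", "sensor_d"]  # hypothetical

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")  # higher = more influential on decisions
```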




Perspective. No big deal if you are too young to remember Sputnik.
Amazon Seeks Permission to Launch 3,236 Internet Satellites




Cute, with a smattering of truth.
Hicks column: Tales of horror and suspense from Charleston’s internet outage
The survivors will tell these stories for generations.
Earlier this week, Charleston endured a horrifying glimpse of how fragile modern civilization really is ... for nearly 12 whole hours.
It started around noon Tuesday, when children across the Lowcountry reported acute — and epidemic — boredom. Soon, millennials were denied food, coffee and other basic necessities when some businesses demanded payment in cash.
Netflix binges ended midstream, people couldn’t order cat food from Amazon Prime, and overweight white men were denied their God-given right to share doctored photos of Alexandria Ocasio-Cortez.
There was no joy in Summerville; the mighty internet had gone out.
… This is what the Dark Ages must’ve been like, but somehow Charlestonians endured.




Tools to help you listen, bookmark, or speed read…



Friday, July 05, 2019


For both my Security classes. (I do like good bad examples.) If facial recognition is used to say, “Hey, check this out!” I’d be happy. If it says, “Found target, launching weapons,” I’m a bit more concerned.
Biased and wrong? Facial recognition tech in the dock
The California city of San Francisco recently banned the use of facial recognition by transport and law enforcement agencies in an acknowledgement of its imperfections and threats to civil liberties. But other cities in the US, and other countries around the world, are trialling the technology.
In the UK, for example, police forces in South Wales, London, Manchester and Leicester have been testing the tech, to the consternation of civil liberties organisations such as Liberty and Big Brother Watch, both concerned by the number of false matches the systems made.
Just this week, academics at the University of Essex concluded that matches in the London Metropolitan Police trials were wrong 80% of the time, potentially leading to serious miscarriages of justice and infringements of citizens' right to privacy.




A bit of Computer Security history.
Getting up to speed with AI and Cybersecurity
Many people are unaware that the first computer virus predates the public internet.
In 1971, Bob Thomas, an American IT academic, wrote Creeper, the first computer program that could migrate across networks. It would travel between terminals on the ARPANET, printing the message “I’m the creeper, catch me if you can”. Creeper was made self-replicating by fellow academic and email inventor Ray Tomlinson, creating the first documented computer virus.
In order to contain Creeper, Tomlinson wrote Reaper, a program that would chase Creeper across the network and erase it – creating the world’s first antivirus cybersecurity solution.




Something for my Computer Security discussions.
THE BIGGEST CYBERSECURITY CRISES OF 2019 SO FAR




A “fake news” law? Think Russia will comply?
Will California’s New Bot Law Strengthen Democracy?
The New Yorker – “When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives. In other words, they are an especially useful tool, considering how politics is played today.
On July 1st, California became the first state in the nation to try to reduce the power of bots by requiring that they reveal their “artificial identity” when they are used to sell a product or influence a voter. Violators could face fines under state statutes related to unfair competition. Just as pharmaceutical companies must disclose that the happy people who say a new drug has miraculously improved their lives are paid actors, bots in California—or rather, the people who deploy them—will have to level with their audience… By attempting to regulate a technology that thrives on social networks, the state will be testing society’s resolve to get our (virtual) house in order after more than two decades of a runaway Internet…”




Another good bad example. What is the “balance of power” when companies have “Big Data”?
Public Management of Big Data: Historical Lessons from the 1940s
Public Management of Big Data: Historical Lessons from the 1940s by Margo Anderson – Distinguished Professor, History and Urban Studies, University of Wisconsin-Milwaukee.
“At its core, public-sector use of big data heightens concerns about the balance of power between government and the individual. Once information about citizens is compiled for a defined purpose, the temptation to use it for other purposes can be considerable, especially in times of national emergency. One of the most shameful instances of the government misusing its own data dates to the Second World War. Census data collected under strict guarantees of confidentiality was used to identify neighborhoods where Japanese-Americans lived so they could be detained in internment camps for the duration of the war.” – Executive Office of the President, Big Data: Seizing Opportunities, Preserving Values, May 2014


(Related) Will the lawyer with the “bigger Data” always win?
Methods of Data Research for Law
Custers, Bart, Methods of Data Research for Law (October 28, 2018). Custers B.H.M. (2018), Methods of data research for law. In: Mak V., Tjong Tjin Tai E., Berlee A. (Eds.) Research Handbook in Data Science and Law. Research Handbooks in Information Law Cheltenham: Edward Elgar. 355-377. Available at SSRN: https://ssrn.com/abstract=3411873
“Data science and big data offer many opportunities for researchers, not only in the domain of data science and related sciences, but also for researchers in many other disciplines. The fact that data science and big data are playing an increasingly important role in so many research areas raises the question whether this also applies to the legal domain. Do data science and big data also offer methods of data research for law? As will be shown in this chapter, the answer to this question is positive: yes, there are many methods and applications that may be also useful for the legal domain. This answer will be provided by discussing these methods of data research for law in this chapter. As such, this chapter provides an overview of these methods.”




Crisis or not?
Opinion: Legislative Fix Needed to Keep Internet Applications Free in California
No matter where you go in California, you’re likely to see someone on their smartphone, tablet or computer using an app or other online service to look something up, watch a video, send an email, check social media or the weather, or any number of the dozens of things people do online every day.
Today, most of these online activities are free, meaning whether you are a college student researching a paper, a single mom looking for a job, or a small employer sending an email to your staff, the internet provides everyone from all walks of life equal and free access to information and services critical to our everyday lives. But these free services that have helped level the socio-economic playing field are at risk unless a policy fix is passed in Sacramento this year.
In 2018, the State Legislature hastily passed a sweeping measure known as the California Consumer Privacy Act (CCPA). This law was intended to give consumers more understanding and control of their online personal data and information, something we all support.
Since then, many flaws have come to light. One of the most significant has to do with language in the CCPA that hinders tailored online advertising by prohibiting the sharing of technical information necessary to make the ads work. These ads are a major reason why many online services are free now, and unless fixed this year, this flaw in the CCPA could result in new costs for online services we take for granted and get for free today.
A policy fix would clarify that when a consumer opts out of the “sale” of their personal information, it does not restrict the ability of companies to continue to market targeted ads to that same consumer, as long as those ads rely on sharing technical information only, not personally identifiable information.


(Related) Is this “personal information” or merely technical information?
‘Fingerprinting’ to Track Us Online Is on the Rise. Here’s What to Do.
The New York Times – Advertisers are increasingly turning to an invisible method that pulls together information about your device to pinpoint your identity. “Fingerprinting involves looking at the many characteristics of your mobile device or computer, like the screen resolution, operating system and model, and triangulating this information to pinpoint and follow you as you browse the web and use apps. Once enough device characteristics are known, the theory goes, the data can be assembled into a profile that helps identify you the way a fingerprint would.
And here’s the bad news: The technique happens invisibly in the background in apps and websites. That makes it tougher to detect and combat than its predecessor, the web cookie, which was a tracker stored on our devices. The solutions to blocking fingerprinting are also limited…”
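To make the mechanics concrete, here is a toy sketch of the core idea: many individually weak device signals are combined and hashed into one stable identifier, with no cookie ever stored on the device. The attribute names and values are hypothetical stand-ins for what a real tracking script would read from the browser.

```python
# Toy sketch of device fingerprinting: combine many weak signals into
# one near-unique identifier. Attributes/values are hypothetical.
import hashlib
import json

device_traits = {
    "screen_resolution": "1920x1080",
    "os": "Windows 10",
    "browser": "Chrome 75",
    "timezone": "America/New_York",
    "language": "en-US",
    "installed_fonts": ["Arial", "Calibri", "Georgia"],
}

# Serialize deterministically, then hash: the same device yields the
# same "fingerprint" on every visit, so it can be followed across sites.
canonical = json.dumps(device_traits, sort_keys=True)
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print(fingerprint[:16])  # a short tracker key for this device
```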




I had hoped to stay away from the whole blockchain/Bitcoin kerfuffle.
Facebook’s Libra Cryptocurrency Could Have Profound Implications for Personal Privacy
In the never-ending search for new revenue streams, social media giant Facebook is now looking to launch its very own cryptocurrency, known as Libra. While the stated goal of Facebook’s Libra cryptocurrency is to bring financial services to the world’s 1.7 billion unbanked population, and to make it easier and more convenient than ever before to send and receive money around the world, data privacy experts, politicians, regulators, central bankers and government officials are already warning that the latest innovation from Facebook may result in a confusing headache of privacy, financial, political, and socio-economic issues.



Thursday, July 04, 2019


Whatever you do, don’t act like criminals covering their tracks...
Georgia Failed to Subpoena Image of Wiped Elections Server
Nearly two years ago, state lawyers in a closely watched election integrity lawsuit said they intended to subpoena the FBI for the forensic image, or digital snapshot, the agency made of a crucial server before state election officials quietly wiped it clean. Election watchdogs want to examine the data to see if there might have been tampering, given that the server was left exposed by a gaping security hole for more than half a year.
A new email obtained by The Associated Press says state officials never did issue the subpoena.
The FBI's data is central to activists' challenge to Georgia's highly questioned, centrally administered elections system, which lacks an auditable paper trail and was run at the time by Gov. Brian Kemp, then Georgia's secretary of state.
The plaintiffs contend Kemp's handling of the wiped server is the most glaring example of mismanagement that could be hiding evidence of vote tampering. They have been fighting for access to the state's centralized black-box voting systems and to individual voting machines, many of which they say have also been wiped clean.




Shucks! Now I’ll have to make my own.
On July 2nd, YouTube Help added “more examples of content that violates” its policy regarding “harmful or dangerous content.” The list includes “Extremely dangerous challenges,” “Violent events,” and “Eating disorders.”
The newest addition is “Instructional hacking and phishing,” which the video site identifies as “showing users how to bypass secure computer systems or steal user credentials and personal data.”




Coming soon to a country near me?
Cookie consent – What “good” compliance looks like according to the ICO
On 3 July 2019, the UK data protection authority (the ICO) updated its guidance on the rules that apply to the use of cookies and other similar technologies. The ICO has also changed the cookie control mechanism on its own website to mirror the changes in the new guidance.
  • The use of cookie walls as a blanket approach to restrict access to a service until users consent will not comply with the cookie consent requirements.
  • Implied consent is also a no-go.
  • The ICO also views consent mechanisms that emphasise that users should ‘agree’ or ‘allow’ cookies over ‘reject’ or ‘block’ as non-compliant. It calls this ‘nudge behaviour’ which influences users towards the ‘accept’ option.




Because it came up in both my classes yesterday.
US Journalist Detained When Returning to US
Pretty horrible story of a US journalist who had his computer and phone searched at the border when returning to the US from Mexico.
The EFF has extensive information and advice about device searches at the US border, including a travel guide.
If you are a U.S. citizen, border agents cannot stop you from entering the country, even if you refuse to unlock your device, provide your device password, or disclose your social media information. However, agents may escalate the encounter if you refuse. For example, agents may seize your devices, ask you intrusive questions, search your bags more intensively, or increase by many hours the length of detention. If you are a lawful permanent resident, agents may raise complicated questions about your continued status as a resident. If you are a foreign visitor, agents may deny you entry.




Another take on the topic.
Ethics in the Age of Artificial Intelligence
If we don’t know how AIs make decisions, how can we trust what they decide?
We are standing at the cusp of the next wave of the technological revolution: AI, or artificial intelligence. The digital revolution of the late 20th century brought us information at our fingertips, allowing us to make quick decisions, while the agency to make decisions, fundamentally, rested with us. AI is changing that by automating the decision-making process, promising better qualitative results and improved efficiency.
Unfortunately, in that decision-making process, AI also took away the transparency, explainability, predictability, teachability and auditability of the human move, replacing it with opacity. The logic for the move is not only unknown to the players, but also unknown to the creators of the program. As AI makes decisions for us, transparency and predictability of decision-making may become a thing of the past.




Anyone can (and will) play.
Make: a machine-learning toy on open-source hardware
In the latest Adafruit video (previously), the proprietors, Limor "ladyada" Fried and Phil Torrone, explain the basics of machine learning, with particular emphasis on the difference between computing a model (hard) and implementing the model (easy and simple enough to run on relatively low-powered hardware). Then they install and run TensorFlow Lite on a small, open-source handheld and teach it to distinguish between someone saying "No" and someone saying "Yes," in just a few minutes. It's an interesting demonstration of the theory that machine learning may be most useful in tiny, embedded, offline processors.
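If you want to try something in the same spirit on a laptop rather than Adafruit's handheld, a minimal sketch of running an already-trained TensorFlow Lite model looks like the following. The model filename and the meaning of the two output scores are assumptions for illustration.

```python
# Minimal sketch of TensorFlow Lite inference, in the spirit of the
# yes/no demo. The model file and output label order are assumptions.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yes_no_model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for preprocessed audio features (e.g., a spectrogram slice);
# a real pipeline would compute these from microphone samples.
features = np.zeros(input_details[0]["shape"], dtype=np.float32)

interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"]).flatten()
print("heard:", "yes" if scores[0] > scores[1] else "no")
```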




“Siri, draft a smarter bill.”
California’s AB-1395 Highlights the Challenges of Regulating Voice Recognition
Under the radar of ongoing debates over the California Consumer Privacy Act (CCPA), the California Senate Judiciary Committee will also soon be considering, at a July 9th hearing, an unusual sectoral privacy bill regulating “smart speakers.” AB-1395 would amend California’s existing laws to add new restrictions for “smart speaker devices,” defined as “standalone devices with an integrated virtual assistant connected to a cloud computing storage service that uses hands-free verbal activation.” Physical devices like the Amazon Echo, Google Home, Apple HomePod, and others (e.g. smart TVs or speakers produced by Sonos or JBL that have integrated Alexa or Google Assistant) would be included, although the bill exempts the same cloud-based voice services when they are integrated into cell phones, tablets, or connected vehicles.




“Let’s replace all that new technology we don’t understand with old technology we can understand, like locks and keys!” Not very urgent if it took three years to pass.
U.S. Government Makes Surprise Move To Secure Power Grid From Cyberattacks
Homeland Security officials say that Russian hackers used conventional tools to trick victims into entering passwords in order to build out a sophisticated effort to gain access to control rooms of utilities in the U.S. The victims included hundreds of vendors that had links to nuclear plants and the electrical grid.
Nations have been trying to secure the industrial control systems that power critical national infrastructure (CNI) for years. The challenge lies in the fact that these systems were not built with security in mind, because they were not originally meant to be connected to the internet. [They were not built with Internet security in mind. The new bits were! Bob]
It is with this in mind that the U.S. has responded with a new strategy: rather than bringing in new technology and skills, it will use analog and manual technology to isolate the grid's most important control systems. This, the government says, will limit the reach of a catastrophic outage.
"This approach seeks to thwart even the most sophisticated cyber-adversaries who, if they are intent on accessing the grid, would have to actually physically touch the equipment, thereby making cyberattacks much more difficult," said a press release as the Securing Energy Infrastructure Act (SEIA), passed the Senate floor.
When introducing the bill in 2016, U.S. Senators Angus King (I-Maine) and Jim Risch (R-Idaho) said: "Specifically, it will examine ways to replace automated systems with low-tech redundancies, like manual procedures controlled by human operators."



Wednesday, July 03, 2019


I wonder how far behind government patching is?
US Cyber Command issues alert about hackers exploiting Outlook vulnerability
US Cyber Command has issued an alert via Twitter today about threat actors abusing an Outlook vulnerability to plant malware on government networks.
The vulnerability is CVE-2017-11774, a security bug that Microsoft patched in Outlook in the October 2017 Patch Tuesday.




As one of the very few who do not own a smartphone, I would immediately come under suspicion: What is he trying to hide? Clearly he tossed the phone rather than be caught with subversive material.
China Is Forcing Tourists to Install Text-Stealing Malware at its Border
Foreigners crossing certain Chinese borders into the Xinjiang region, where authorities are conducting a massive campaign of surveillance and oppression against the local Muslim population, are being forced to install a piece of malware on their phones that gives all of their text messages as well as other pieces of data to the authorities, a collaboration by Motherboard, Süddeutsche Zeitung, the Guardian, the New York Times, and the German public broadcaster NDR has found.
The Android malware, which is installed by a border guard when they physically seize the phone, also scans the tourist or traveller's device for a specific set of files, according to multiple expert analyses of the software. The files authorities are looking for include Islamic extremist content, but also innocuous Islamic material, academic books on Islam by leading researchers, and even music from a Japanese metal band.




Was it really that hard to comply?
TikTok now faces a data privacy investigation in the UK, too
TikTok is under investigation in the UK for how it handles the safety and privacy of young users. UK Information Commissioner Elizabeth Denham told a parliamentary committee on Tuesday that the popular short-form video app potentially violated GDPR provisions requiring technology companies to provide different rules and protections for children, The Guardian reported. The UK began its probe of TikTok back in February, shortly after the FTC fined the app for child privacy violations.




Available in November?
GDPR For Dummies




Curses! Foiled again.
House lawmakers officially ask Facebook to put Libra cryptocurrency project on hold
House Democrats are requesting Facebook halt development of its proposed cryptocurrency project Libra, as well as its digital wallet Calibra, until Congress and regulators have time to investigate the possible risks it poses to the global financial system.
… “If products and services like these are left improperly regulated and without sufficient oversight, they could pose systemic risks that endanger U.S. and global financial stability,” Waters writes. “These vulnerabilities could be exploited and obscured by bad actors, as other cryptocurrencies, exchanges, and wallets have been in the past.”




For my geeks.
Facebook open-sources DLRM, a deep learning recommendation model
Facebook today announced the open source release of Deep Learning Recommendation Model (DLRM), a state-of-the-art AI model for serving up personalized results in production environments. DLRM can be found on GitHub, and implementations of the model are available for Facebook’s PyTorch, Facebook’s distributed learning framework Caffe2, and Glow C++.
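The real implementation is in the GitHub repo; purely as a rough illustration of the architecture's central idea (embedding tables for sparse categorical features, a small MLP for dense features, and a top MLP over their combination), a toy PyTorch sketch might look like this. All names and sizes are made up.

```python
# Toy sketch of the DLRM idea -- not Facebook's actual code. Sparse
# categorical features pass through embedding tables, dense features
# through a small MLP, and a top MLP scores the combination.
import torch
import torch.nn as nn

class TinyRecModel(nn.Module):
    def __init__(self, num_users=1000, num_items=5000, dim=16, dense_in=4):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        self.dense_mlp = nn.Sequential(nn.Linear(dense_in, dim), nn.ReLU())
        self.top_mlp = nn.Sequential(nn.Linear(3 * dim, 32), nn.ReLU(),
                                     nn.Linear(32, 1))

    def forward(self, user_ids, item_ids, dense_feats):
        # Concatenate embedded sparse features with processed dense ones.
        z = torch.cat([self.user_emb(user_ids),
                       self.item_emb(item_ids),
                       self.dense_mlp(dense_feats)], dim=1)
        return torch.sigmoid(self.top_mlp(z)).squeeze(1)

model = TinyRecModel()
score = model(torch.tensor([42]), torch.tensor([7]), torch.zeros(1, 4))
print(score)  # predicted interaction probability for one user/item pair
```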



Tuesday, July 02, 2019


Who would you like to win the 2020 election and by how much?
Internet Research Agency Twitter activity predicted 2016 U.S. election polls
First Monday – Volume 24, Number 7 – 1 July 2019, by Ruck et al.: “In 2016, the Internet Research Agency (IRA) deployed thousands of Twitter bots that released hundreds of thousands of English language tweets. It has been hypothesized that this affected public opinion during the 2016 U.S. presidential election. Here we test that hypothesis using vector autoregression (VAR), comparing time series of election opinion polling during 2016 versus numbers of re-tweets or ‘likes’ of IRA tweets. We find that changes in opinion poll numbers for one of the candidates were consistently preceded by corresponding changes in IRA re-tweet volume, at an optimum interval of one week before. In contrast, the opinion poll numbers did not correlate with future re-tweets or ‘likes’ of the IRA tweets. We find that the release of these tweets parallel significant political events of 2016 and that approximately every 25,000 additional IRA re-tweets predicted a one percent increase in election opinion polls for one candidate. As these tweets were part of a larger, multimedia campaign, it is plausible that the IRA was successful in influencing U.S. public opinion in 2016.”
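For students who want to see the method rather than the politics: a sketch of the paper's general approach (vector autoregression with a Granger-style lead/lag test) using statsmodels on synthetic data. The two series below are fabricated for illustration; the study used weekly IRA re-tweet volumes and real 2016 polling numbers.

```python
# Sketch of the general VAR / Granger-causality approach on synthetic
# data. These series are fake; the study used IRA re-tweets and polls.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 52  # one year of weekly observations
retweets = rng.normal(size=n).cumsum()
# Make the poll series loosely track re-tweets with a one-week lag.
polls = 0.5 * np.roll(retweets, 1) + rng.normal(scale=0.5, size=n)

# Difference both series so the VAR is fit on (roughly) stationary data.
df = pd.DataFrame({"ira_retweets": retweets,
                   "poll_share": polls}).diff().dropna()

results = VAR(df).fit(maxlags=4, ic="aic")
# Does past re-tweet volume help predict current poll numbers?
test = results.test_causality("poll_share", ["ira_retweets"], kind="f")
print(test.summary())
```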




Another checklist for my Computer Security students.
CYBER RESILIENCE – THE 6 BIGGEST THREATS RIGHT NOW FOR LEGAL
The threat constantly evolves and grows – and legal firms are at particular peril. In this article, we look at the 6 biggest current threats as we perceive them.




Because we can?
How Amazon and the Cops Set Up an Elaborate Sting Operation That Accomplished Nothing
… New documents obtained by Motherboard using a Freedom of Information request show how Amazon, Ring, a GPS tracking company, and the U.S. Postal Inspection Service collaborated on a package sting operation with the Aurora, Colorado Police Department in December. The operation involved equipping fake Amazon packages with GPS trackers, and surveilling doorsteps with Ring doorbell cameras in an effort to catch someone stealing a package on tape.
The documents show the design and implementation of a highly elaborate public relations stunt, intended both to endear Amazon and Ring to local law enforcement and to make local residents fear the place they live. The parties were disappointed when the operation didn’t result in any arrests.




Interesting idea.
AI and the Social Sciences Used to Talk More. Now They’ve Drifted Apart.
Artificial intelligence researchers are employing machine learning algorithms to aid tasks as diverse as driving cars, diagnosing medical conditions, and screening job candidates. These applications raise a number of complex new social and ethical issues.
So, in light of these developments, how should social scientists think differently about people, the economy, and society? And how should the engineers who write these algorithms handle the social and ethical dilemmas their creations pose?
“These are the kinds of questions you can’t answer with just the technical solutions,” says Dashun Wang, an associate professor of management and organizations at Kellogg. “These are fundamentally interdisciplinary issues.”




We will need thousands of very specific laws if we go this way.
Deepfake revenge porn distribution now a crime in Virginia
As of today, Virginia is one of the first states in the country to impose criminal penalties on the distribution of non-consensual "deepfake" images and video.
The new law amends existing law in the Commonwealth that defines distribution of nudes or sexual imagery without the subject's consent — often called revenge porn — as a Class 1 misdemeanor. The bill updates that law by adding a category of "falsely created videographic or still image" to the text.
New laws in Virginia take effect on July 1.




Pick one (or all five) for your toolkit now, before something goes wrong.




For Kindle lovers…




For our programming students?