Saturday, October 26, 2019

Something other states should emulate.
Texas Cyberteam Helps Cities Respond to Ransomware Attacks
The Texas Military Department — the umbrella agency for the state's National Guard branches — will host hundreds of state, local and county officials at Camp Mabry in Austin on Thursday to show how its Cyber Incident Response Team plans to handle future attacks while offering tips to protect valuable software.
The Texas Military Department has recruited about 100 ransomware experts, a team made up of military and civilian volunteers, who were the first to respond to the August attacks, according to Winnek.

Should be of interest to my students who were labeling Microsoft a Cash Cow.
The Pentagon’s $10 billion deal to provide cloud computing services to the Department of Defense officially went to Microsoft Friday. The news came as an upset to Amazon, whose competing bid had appeared to be the frontrunner throughout most of the contract’s deliberations. Throughout the yearlong process, President Donald Trump repeatedly disparaged Amazon’s prospects, which seems in part an extension of his outspoken vendetta against the company and its CEO, Jeff Bezos.
According to a Washington Post report, Microsoft is now set to take over the Joint Enterprise Defense Infrastructure (JEDI) project, a potentially decade-long federal cloud computing initiative that has attracted interest from some of the biggest names in tech.

Friday, October 25, 2019

Probably not the best way to describe your security.
New York Times abruptly eliminates its "director of information security" position: "there is no need for a dedicated focus on newsroom and journalistic security"
Runa Sandvik is a legendary security researcher who spent many years as a lead on the Tor Project; in 2016, the New York Times hired her as "senior director of information security," where she was charged with protecting the information security of the Times's newsroom, sources and reporters. Yesterday, the Times fired her, eliminating her role altogether, because "there is no need for a dedicated focus on newsroom and journalistic security."
If you are a source contemplating going to the Times with a story that could land you in physical, economic, or legal jeopardy, this is really sobering news: can you trust a news entity with your safety when it has eliminated the only person charged with defending it?

So, what do they cover?
AIG Is the Latest Insurer to Back Away from Cyber Insurance Coverage
In many ways, the case involving SS&C Technologies and AIG should be black and white, and not gray. In 2016, SS&C Technologies was involved in a major cyber attack in which Chinese hackers managed to dupe the company out of $5.9 million. Spoof emails purporting to come from one of the company’s clients – Tillage Commodities Fund – instructed the company to make six wire transfers to an unknown bank account holder in Hong Kong. This is the classic type of business email compromise (BEC) scam, in which a third party hacker poses as someone else via email in order to ensure that funds move into the hacker’s bank account. So, theoretically, this is exactly the type of incident that should have been covered under the AIG cyber insurance policy.
But there’s just one little problem here – SS&C Technologies acknowledged that the funds were “stolen” and not “lost,” and that automatically transformed the cyber incident into a criminal act. In short, says AIG, Chinese criminals stole the $5.9 million from a client account, and therefore, the cyber insurance policy no longer applies. According to AIG, the cyber insurance policy only covers losses from traditional cyber attacks (e.g. a DDoS attack taking down the company’s servers for days), and not from brazen criminal attacks. Thus, as AIG eventually told a court in the Southern District of New York, it should not be held liable for “breach of contract.” An event involving a company victimized by suspected Chinese criminals simply is not covered by a cyber insurance policy.
Moreover, as more details of the case emerge, it’s clear that SS&C Technologies failed to have even the most basic form of cybersecurity defenses in place. For example, one request from the hackers to wire $3 million into a Hong Kong bank account simply included a brief introduction (“How was your weekend?”), followed by details of where to wire the money. Other emails appeared to be coming from a clearly spoofed email address, with the name of the client misspelled as “Tilllage” instead of “Tillage.” Other emails included awkward syntax, grammatical errors, and nonsensical sentence construction. In short, it was the sort of shoddy, second-rate phishing email that is all too common these days. Surely, anyone with a modicum of common sense would have seen through this scam, right?
Making things even more damaging from the perspective of AIG, SS&C failed to comply with its own internal policy, which clearly stated that any bank wire transfer needed to be authorized by four different people. This is exactly the sort of basic cyber defense that could have prevented the fraudulent transaction from taking place – at some point, wouldn’t a senior executive or top manager have seen through these obvious cyber shenanigans and stopped the wire transfer? Thus, from the perspective of AIG, SS&C Technologies failed to exercise even a modicum of care and responsibility. How could SS&C Technologies even argue that the funds were “lost” and not “stolen”?
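A four-approver rule like SS&C’s is trivial to enforce in software rather than on paper; a minimal sketch (the role names are hypothetical, and this is not SS&C’s actual system):

```python
REQUIRED_APPROVALS = 4

def authorize_wire(amount: float, approvers: set[str]) -> bool:
    """Release a wire transfer only if enough distinct people signed off.

    Using a set means the same person approving twice counts once.
    """
    if len(approvers) < REQUIRED_APPROVALS:
        raise PermissionError(
            f"only {len(approvers)} of {REQUIRED_APPROVALS} required approvals; "
            "transfer blocked")
    return True

# Three distinct approvals: the transfer must not go out.
try:
    authorize_wire(3_000_000, {"cfo", "controller", "treasury"})
except PermissionError as e:
    print("blocked:", e)

# Four distinct approvals: released.
print(authorize_wire(3_000_000, {"cfo", "controller", "treasury", "ops_manager"}))
```

The point of encoding the control is that no single phished employee can complete the transfer, however convincing the email.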

(Related) The victims should talk.
Ocala city loses over $500,000 due to spear-phishing attack
According to reports, the incident occurred when a scammer sent a phishing email to a city department.
An employee mistook the email for a legitimate request and inadvertently transferred $640,000 to a fraudulent bank account set up by the scammer.
In light of the incident, the city plans to conduct an internal investigation to determine the methods and scope of the phishing attack, and will then revise its policies to prevent similar attacks in the future.

Security that kills? I suspect they installed security that was initially rejected as too impactive. When you have a breach, you “gotta do something!”
Ransomware and data breaches linked to uptick in fatal heart attacks
Imagine a scenario where you have a medical emergency, you head to the hospital, and it is shut down. On a Friday morning in September, this hypothetical became a reality for a community in northeast Wyoming.
Campbell County Health reported a systemwide crippling of their computers that affected its flagship hospital and nearly 20 clinics located in the city of Gillette. For eight hours, the hospital’s emergency department was forced to transfer patients even though the next nearest hospital was located 70 miles away.
New research finds that at hospitals that experienced a data breach, the death rate among heart attack patients increased in the months and years afterward. This increased mortality doesn’t appear to be due to the perpetrators themselves — the hackers are not controlling the allocation of medications or doctors. Rather the issue may lie with how health care systems adjust their cybersecurity after an attack, according to a study published in October’s issue of Health Services Research.
Cybersecurity remediation at hospitals appears to be slowing down doctors, nurses and other health professionals as they offer emergency cardiac care, based on this new study.
After data breaches, as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined in the new study.

Looks costly. I don’t think they like it either.
Increased Surveillance is Not an Effective Response to Mass Violence
This week, Senator Cornyn introduced the RESPONSE Act, an omnibus bill meant to reduce violent crimes, with a particular focus on mass shootings. The bill has several components, including provisions that would have significant implications for how sensitive student data is collected, used, and shared. The most troubling part of the proposal would broaden the categories of content schools must monitor under the Children’s Internet Protection Act (CIPA); specifically, schools would be required to “detect online activities of minors who are at risk of committing self-harm or extreme violence against others.”
Unfortunately, the proposed measures are unlikely to improve school safety; there is little evidence that increased monitoring of all students’ online activities would increase the safety of schoolchildren, and technology cannot yet be used to accurately predict violence. The monitoring requirements would place an unmanageable burden on schools, pose major threats to student privacy, and foster a culture of surveillance in America’s schools. Worse, the RESPONSE Act mandates would reduce student safety by redirecting resources away from evidence-based school safety measures.

Lots of detail.
US prisons and jails using AI to mass-monitor millions of inmate calls
New technology driven by artificial intelligence (AI) is helping prison wardens and sheriffs around the country crack unsolved crimes and thwart everything from violence and drug smuggling to attempted suicides – in near real time, in some cases – through digitally mass-monitoring millions of phone calls inside the nation’s sprawling prison and jail systems.
Despite legally-mandated warnings preceding every prison phone call that the conversation is being recorded and monitored, inmates still regularly reveal astonishing amounts of incriminating information, according to technology company records provided to ABC News and interviews with law enforcement and corrections officials using the systems in multiple states.

Alcohol sniffers in cars, bomb sniffers at airports, the uses are limitless.
Google researchers taught an AI to recognize smells
Their algorithms can identify odors based on their molecular structures.
As Wired points out, there are a few caveats, and they are what make the science of smell so tricky. For starters, two people might describe the same scent differently, for instance "woody" or "earthy." Sometimes molecules have the same atoms and bonds, but they're arranged as mirror images and have completely different smells. Those are called chiral pairs; caraway and spearmint are just one example. Things get even more complicated when you start combining scents.

Yes, on some technical issues. No, based on personalities.
Why An Amazon-Oracle Merger Is A Very Real Possibility
Per Trefis analysis, a merger of Amazon and Oracle could unlock significant value. While the idea may sound very ambitious, Oracle may be the best acquisition Amazon could ever make if it wants to keep itself at the top of the cloud technology food chain.

Open Access Resources for Legal Research
Via Lyonette Louis-Jacques, The University of Chicago | D’Angelo Law Library – In honor of International Open Access Week, our library created an “Open Access Resources for Legal Research LibGuide. These are some representative free law sources. The focus is on U.S. law, but there’s a foreign and international law section.”

Thursday, October 24, 2019

Exactly. That’s what is so scary. “You hack my electric grid, I nuke your tractor factory!”
'No such thing' as cyber warfare: Australia's head of cyber warfare
Warfare is warfare, espionage is internationally normal, and cyber is just one of a suite of potential capabilities for a military response, says Major General Marcus Thompson.
… "Any response that the government might choose to make that involves the military could occur using any capabilities that the military has available, including of course capabilities that sit within ADF [Australian Defence Force] and the Australian Signals Directorate [ASD]," he said.
"A military response would be one of any number of options, or could be part of a suite of options, that the government of the day could consider."

No doubt the FBI will point to laws like these and insist we are falling behind. Sweden seems a much different world.
Swedish police cleared to deploy spyware against crime suspects
Spyware should be able to turn on device cameras and microphones, get encrypted chat logs.
The new technical capabilities granted to Swedish police are part of a 34-point plan to upgrade law enforcement powers when investigating gang or violent crimes.
Damberg said that granting police the legal and technical capabilities to intercept encrypted communications was a top priority, as they were being left behind by criminal groups who now often use services like Signal and WhatsApp to coordinate operations.
The minister told local press that 90% of all the communications police have intercepted for investigations in recent years have been encrypted.
Damberg told local news outlet Omni that Malmö Police believe that there has not been a single murder in the city of Malmö in recent years that has not been preceded by communication between gang members in encrypted form.
More than a decade ago, German authorities began deploying a malware strain named the Bundestrojaner (Federal Trojan) as part of their investigations.
Sweden's police plan is similar, and they plan to deploy malware with spyware-like capabilities on suspects' devices. The idea is to listen in on encrypted audio or video calls in real-time, or extract chat logs from encrypted instant messaging apps.

(Related) Both sides of the encryption debate?
Rethinking Encryption
During the Federal Bureau of Investigation’s very public disagreement with Apple over encryption in 2016, I was the bureau’s general counsel and responsible for leading its legal efforts on that matter.
… public safety officials should also become among the strongest supporters of widely available strong encryption.

The more the merrier?
A Balancing Act: A Brief Overview of California Privacy Laws

As impactive as the GDPR?
A.I. Regulation Is Coming Soon. Here’s What the Future May Hold
If you want to know how the global regulation of artificial intelligence might shape up in the coming years, best look to Berlin.
Last year Angela Merkel’s government tasked a new Data Ethics Commission with producing recommendations for rules around algorithms and A.I. The group’s report landed Wednesday, packed with ideas for guiding the development of this new technology in a way that protects people from exploitation.
The group—whose members work in academia, industry and regulation—also called for a mandatory labeling scheme that would apply to algorithmic systems that pose any potential threat to people’s rights, and said people affected by algorithmic decisions should be able to get “meaningful information” about how those decisions were reached.
It also called for an update to liability rules, to make sure companies can be punished for rights violations and bad decisions made by algorithms that would otherwise be made by human employees.
… “If Germany’s guidelines were to inspire the EU’s forthcoming A.I. legislation, the EU will indeed manage to set a global standard—a blueprint on what to do to fail in the digital economy,” Chivot wrote in a statement.

Do what I mean, not what I say! A TED talk.
The problem with artificial intelligence is that it will do exactly what we ask it to do

(Related) No one noticed? Don’t they compare the number of incoming reports with how they are directed?
Police database flagged 9,000 cybercrime reports as 'security risk'
Thousands of reports of cybercrime were quarantined on a police database instead of being investigated because software designed to protect the computer system labelled them a security risk.
The backlog at one point stretched to about 9,000 reports of cybercrime and fraud, some of them dating back to October last year. The reports had been made to Action Fraud and handed to the National Fraud Intelligence Bureau (NFIB), run by the City of London police.

I’ve used this a bit over the last couple of months. Definitely worth exploring.
AI2’s Semantic Scholar search engine now takes in the full sweep of scientific papers
Seattle’s Allen Institute for Artificial Intelligence says its academic search engine, Semantic Scholar, is now in high gear — thanks to a power boost from Microsoft that helped expand its reach to every field of science.
Over the course of just a few months, Semantic Scholar’s database has gone from indexing 40 million research papers in computer science and biomedicine to taking in more than 175 million papers. The database not only covers the time-honored physical sciences, but also political science and sociology, art and philosophy.

A safe way to train Ethical Hackers to use the TOR browser?
BBC News launches ‘dark web’ Tor mirror
BBC News: “The BBC has made its international news website available via Tor, in a bid to thwart censorship attempts. Tor is a privacy-focused web browser used to access pages on the dark web. The browser can obscure who is using it and what data is being accessed, which can help people avoid government surveillance and censorship. Countries including China, Iran and Vietnam are among those who have tried to block access to the BBC News website or programmes.
Instead of visiting the site’s regular web addresses, users of the Tor browser can visit the new bbcnewsv2vjtpsuy.onion web address. Clicking this web address will not work in a regular web browser. The dark web copy of the BBC News website will be the international edition, as seen from outside the UK. It will include foreign language services such as BBC Persian, BBC Arabic and BBC Russian…”

Wednesday, October 23, 2019

Why governments are never lead adopters.
European Government Organizations Are Enthusiastic About Artificial Intelligence but Face Challenges Adopting It, According to Accenture Study
The study — based on a survey of 300 government leaders and senior information technology (IT) decision-makers in Finland, France, Germany, Norway and the U.K. — found that the vast majority (90%) of respondents believe that AI will have a high impact on their organizations over the coming years. In addition, nearly the same number (86%) said that their organization plans to increase its spending on AI next year.
Customer service and fraud & risk management are the two operational areas favored most for public service AI deployments, cited by 25% and 23% of respondents, respectively. In addition, respondents most often cited increased efficiencies, cost or time savings, and enhanced productivity as the greatest anticipated benefits from their AI investments.
Despite the support and enthusiasm for AI deployments, government respondents said their organizations are experiencing systemic challenges to delivering successful AI projects. More than two-thirds (71%) cited difficulties in procuring the right AI building blocks — notably data integrity and processing capabilities; nearly six in seven (84%) cited challenges in adapting AI logic and reasoning to their industry context; and more than three-fourths (81%) said they experienced challenges integrating AI technologies into their back-office operations. In addition, more than two-fifths (42%) have security-related concerns around the use of AI and almost one-third (31%) said they lacked the necessary talent and skills to scale their AI investments.

Two definitions of antitrust?
Forty-six attorneys general have joined a New York-led antitrust investigation of Facebook
The expanded roster of states and territories taking part in the investigation reflects lingering, broad concerns among the country’s competition watchdogs that “Facebook may have put consumer data at risk, reduced the quality of consumers’ choices, and increased the price of advertising,” New York Attorney General Letitia James (D) said in a statement.
The Washington Post first reported on the states’ interest in joining the investigation. Will Castleberry, vice president for state and local policy at Facebook, said in a statement that the company would work “constructively with state attorneys general." He added, “People have multiple choices for every one of the services we provide.”

Tuesday, October 22, 2019

A friendly heads-up!
SIM-Jackers Can Empty Your Bank Account with a Single Phone Call
These days – as journalist and food writer Jack Monroe discovered last week, when £5,000 was stolen from her bank account – scammers can simply transfer your phone number to a new SIM card and gain access to every penny in your name.
This relatively new crime is known as "SIM-jacking", and works like this: perpetrators obtain important details about their victims either by scouring social media or conning them into divulging personal information. Using these details, they pose as their victims, convince network providers to transfer their numbers to new SIM cards and post out those SIMs. Once the swap is complete, messages containing codes for those two-factor authentication systems we now all have can be intercepted, and fraudsters can hop into your email, social media or mobile banking accounts.
In 2018, the BBC's Watchdog sent undercover reporters into Vodafone and O2 stores to see if they could obtain replacement SIM cards without proper ID checks. In both cases they walked away with the SIMs without having to undergo the checks.
"One of the reasons SIM-swap attacks are so effective is that many mobile phone carrier representatives are easy to socially engineer," explained a former black hat hacker, who dabbled in SIM swaps before going straight and becoming a white hat hacker. "An attacker can call your phone provider, pretend to be you and spin some story to get the support agent to transfer your number to a SIM. If he runs into any friction, he can hang up and try again with another agent."

HIPAA enforcement may need to get serious.
Healthcare Organizations have Become Hotbed for Phishing Email Attacks in First Quarter of 2019
A new study by Proofpoint reveals that there has been a 300% jump in imposter emails sent to healthcare organizations during the first quarter of 2019.
Other key findings in Proofpoint’s ‘2019 Healthcare Threat Report’ include:
  • 95% of targeted healthcare companies saw emails spoofing their trusted domains or patients. The spoofed domains belonged to business partners of the targeted healthcare companies.
  • Subject lines of 55% of all imposter email attacks included ‘payment’, ‘request’ and ‘urgent’ related terms.
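Subject-line patterns like these are easy to screen for before a message ever reaches an inbox. A toy filter (the keyword list is illustrative only, not Proofpoint’s detection method):

```python
# Terms the Proofpoint findings associate with imposter emails.
SUSPECT_TERMS = {"payment", "request", "urgent"}

def suspicious_subject(subject: str) -> bool:
    """Return True if a subject line contains any known-suspect term."""
    words = {w.strip(".,!?:").lower() for w in subject.split()}
    return bool(words & SUSPECT_TERMS)

print(suspicious_subject("URGENT: payment request attached"))  # True
print(suspicious_subject("Quarterly all-hands agenda"))        # False
```

Real gateways score many signals (sender reputation, display-name spoofing, link targets) rather than keywords alone, but a keyword hit is a cheap first-pass flag.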

Why we update. No need to hack through front line security when the backdoor is wide open.
Outdated OSs Still Present in Many Industrial Organizations: Report
According to the latest data from CyberX, 62% of analyzed sites house devices running outdated and unsupported versions of Windows, such as Windows XP and 2000, and the percentage jumps to 71% if Windows 7, which reaches end of support in January 2020, is also included.
The use of Windows versions that no longer receive security updates poses a serious risk as it allows attackers to compromise systems using vulnerabilities for which details and PoC exploits are often publicly available. Moreover, the company pointed out, even if Microsoft releases patches for unsupported versions of Windows to address high-risk flaws, as it did in the case of the BlueKeep vulnerability, it may not be easy for an organization to deploy the patch on industrial systems.

Here now and ready for work.
Gartner Announces Top 10 Strategic Technology Trends For 2020
Today Gartner, Inc. announced its top ten strategic technology trends for 2020. Analysts presented their findings during Gartner IT Symposium in Orlando.
Gartner defines a strategic technology trend as “one with substantial disruptive potential that is beginning to break out of an emerging state into broader impact and use, or which is rapidly growing with a high degree of volatility reaching tipping points over the next five years.”

Because AIs are special? Or perhaps they are just like regular people?
Copyright Law Should Not Restrict AI Systems From Using Public Data
Commentary – Center for Data Innovation: “In March 2019, IBM created the “Diversity in Faces” dataset to provide a set of photos of peoples’ faces of various ages and ethnicities to help reduce bias in facial recognition systems. Even though IBM compiled the dataset from photos people shared online with a license which allows others to use the images for any purpose, some people strongly objected because IBM did not explicitly ask people for permission to use their photos in this dataset. NBC News even called it “facial recognition’s ‘dirty little secret.’” While this characterization is profoundly misleading (it was an effort to reduce bias in facial recognition, which is hardly “dirty,” and IBM was very public about the source of this data), this controversy highlights the challenge organizations face in creating datasets for AI, even when they have lawful access to the data, and the need for government to play a larger role in compiling data for computational uses…”

What kind of Terminator do we want?
A Path Towards Reasonable Autonomous Weapons Regulation
Editor’s Note: The debate on autonomous weapons systems has been escalating over the past several years as the underlying technologies evolve to the point where their deployment in a military context seems inevitable. IEEE Spectrum has published a variety of perspectives on this issue. In summary, while there is a compelling argument to be made that autonomous weapons are inherently unethical and should be banned, there is also a compelling argument to be made that autonomous weapons could potentially make conflicts less harmful, especially to non-combatants. Despite an increasing amount of international attention (including from the United Nations), progress towards consensus, much less regulatory action, has been slow. The following workshop paper on autonomous weapons systems policy is remarkable because it was authored by a group of experts with very different (and in some cases divergent) views on the issue. Even so, they were able to reach consensus on a roadmap that all agreed was worth considering.

Those who study history are doomed to create the best AI?
What Do Machine Learning and Hunter-Gatherer Children Have in Common?
Hunter-gatherer communities in Congo, where I do my field research, do not often give direct instructions when teaching their children. Instead, they create a learning opportunity, like providing a tool, and monitor the child’s action without interfering. The child then adjusts her behavior according to the feedback she receives based on her performance. Likewise, neural networks work by giving an opportunity for the machine to learn (i.e., input) and providing feedback based on the output obtained by the network structure.
The ultimate goal in AI research is to generate artificial general intelligence (AGI), that is a machine that can understand and learn as we humans do. Many AI researchers, like the DeepMind team, believe that this will be possible through more independent learning strategies. In unsupervised learning, for example, machines learn by observing data without a predetermined goal or explicit guidance. This form of learning is parallel to how hunter-gatherer children learn most skills.

No link to the report?
New survey shows American workers are actually excited about AI for this big reason
The company, which makes a visual tool that simplifies the way teams work, released a report on the state of automation and Artificial Intelligence that surveyed 1,000 employed Americans on their thoughts about automating workplace tasks.
A majority of the workforce (54%) believes they would save over five hours from tools that automate tasks.
… Automation is even becoming something that job-seekers look for as part of the package.
“It’s becoming part of a benefit,” Burns said. “If you go out there looking the marketplace, you can see people talking about, What systems are you using to automate your workflows?”

Why? Are AI systems clamoring to buy books by AI authors?
A bookshop entirely created by artificial intelligence
Melding the disparate worlds of art and computer science, Andreas Refsgaard and Mikkel Loose have developed a fascinating AI project: an online bookstore entirely generated by artificial intelligence. Every aspect of the site is generated by machine learning algorithms, from the entire books and accompanying cover artwork, to the reviews and pictures of people reviewing the books. And on top of that, all the books are actually available to buy on Amazon.
The duo were not interested in generating a new machine learning model, but instead used the project to aggregate a variety of different preexisting models into a singular outcome. So, for example, the books and accompanying reviews were generated using a freely available character-based recurrent neural network called char-rnn. The images of the reviewers’ faces were generated using a different model, and the book covers used yet another model. Even the books’ prices were set by a neural network trained on book prices from Amazon.
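char-rnn learns text character by character; the basic flavor can be seen in a far simpler character-level Markov chain (a hand-rolled stand-in for illustration, not the neural model the project actually used):

```python
import random
from collections import defaultdict

def train(text: str, order: int = 3) -> dict:
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model: dict, seed: str, length: int = 80) -> str:
    """Extend `seed` one character at a time by sampling from the model."""
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # unseen context: stop generating
        out += random.choice(choices)
    return out

corpus = ("the book was reviewed by the reader and "
          "the reviewer read the book again and again ")
random.seed(0)
print(generate(train(corpus), "the"))
```

A neural network like char-rnn generalizes across contexts instead of memorizing them verbatim, which is why it can produce whole novel books rather than chopped-up corpus fragments.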

How to follow The Donald?
How to create RSS Feeds from Twitter
“Twitter is a great tool to stay up-to-date with everything that is happening: news, hobbies and interests, celebrities and influencers. However, some users prefer to consume and monitor this information via RSS feeds using RSS readers or custom integrations within their own apps. The service allows users to create RSS feeds from any public Twitter user feed, hashtag, at-mention or search keyword, as well as feeds of their own Twitter timelines without writing a single line of code. Here are three options on how to create these feeds…”
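Whatever service is used, the output side is straightforward: an RSS feed is just XML. A sketch using only the Python standard library, with hard-coded items standing in for whatever a Twitter scraper or API would return:

```python
import xml.etree.ElementTree as ET

def build_rss(title: str, link: str, items: list[dict]) -> str:
    """Build a minimal RSS 2.0 document from a list of {title, link} items."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    for item in items:
        node = ET.SubElement(channel, "item")
        ET.SubElement(node, "title").text = item["title"]
        ET.SubElement(node, "link").text = item["link"]
    return ET.tostring(rss, encoding="unicode")

# Hypothetical items; a real integration would fetch these from Twitter.
feed = build_rss(
    "Example timeline",
    "",
    [{"title": "First tweet", "link": ""}],
)
print(feed)
```

Point any RSS reader at the resulting document and each tweet shows up as a feed item.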

Time to start collecting?
‘The perfect combination of art and science’: mourning the end of paper maps
UK Guardian – Digital maps might be more practical in the 21st century, but the long tradition of cartography is magical – “Some for one purpose and some for another liketh, loveth, getteth, and useth Mappes, Chartes, & Geographicall Globes.” So explained John Dee, the occult philosopher of the Tudor era. The mystical Dr Dee would, perhaps, have understood the passion stirred by Geosciences Australia’s recent decision to stop producing or selling paper versions of its topographic maps in December, citing dwindling demand…”