Saturday, February 01, 2020


Another evolving scam. The low odds of a shotgun approach improve considerably with a little research.
Ashley Madison cyber-breach: 5 years later, users are being targeted with ‘sextortion’ scams
… Researchers at email security company Vade Secure found the new scam earlier this year, when they saw a small number of targeted emails with apparent information from Ashley Madison breach victims. The scam emails seemed to be well researched, with not just the users’ email addresses but information like when the victim signed up, their username, and the interests they entered on the site, said Adrien Gendre, chief product officer for Vade Secure.
The threats are a worrying evolution of the sextortion scam because they appear to incorporate real information.
In the most typical version of sextortion, fraudsters make dubious, fictional claims about you via email. They say they’ve recorded you in a compromising position through your computer or that they have pictures of an alleged affair you are having. In those cases, the criminals blast out thousands of similar-sounding emails in hopes of persuading just one person to fall for the trick and make a requested extortion payment. The recordings and affairs are almost always nonexistent.
But in the new Ashley Madison cases, Gendre said the scammers are using carefully selected information that appears to be from real Ashley Madison subscribers, and piecing that information into more precisely targeted emails to those individuals. The ransomers then demand around $1,000 in bitcoin to keep the information quiet. The grain of truth in their pitch sets the scam apart.




For my students.
5 Free Guides to Understand Digital Security and Protect Your Privacy




Something they could have done from the beginning if they had thought of it.
Ring has begun pushing out an update to its phone app with the aim of consolidating all of its security settings, a likely response to general privacy concerns, as well as more specific ones about “hackers” who’ve hijacked in-home camera feeds in recent months.
The changes, teased at CES 2020, include implementation of a “Control Center” within the Ring app that grants customers easy access to a variety of security options, including two-factor authentication—an easy-to-use feature that, as Gizmodo has reported, all but entirely prevents cameras from being hijacked remotely.
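(Not Ring's code, just a generic sketch of the second factor itself: a time-based one-time password generated and checked with the third-party pyotp library. The names and messages are invented for illustration.)

import pyotp  # third-party: pip install pyotp

# Enrollment: the service generates a secret once and shares it with the
# user's authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the user types the current six-digit code; the service checks it
# in addition to the password, so a stolen password alone is not enough.
code_from_user = totp.now()  # stands in for what the user would type
if totp.verify(code_from_user):
    print("Second factor accepted; sign-in allowed.")
else:
    print("Second factor rejected.")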




Not sure I agree.
As automated technologies quickly and methodically climb out of the uncanny valley, customer service calls, website chatbots, and interactions on social media may become progressively less evidently artificial.
This is already happening. In 2018, Google demoed a technology called Duplex, which calls restaurants and hair salons to make appointments on your behalf. At the time, Google faced a backlash for using an automated voice that sounds eerily human, even employing vocal tics like “um,” without disclosing its robotic nature. Perversely, today’s Duplex has the opposite problem. The automated system does disclose itself, but at least 40% of its calls have humans on the phone, and it’s very easy for call recipients to confuse those real people with AI.
As I argue in a new Brookings Institution paper, there is clear and immediate value to a broad requirement of AI disclosure in this case and many others.




Russia wants full control of its tech areas?
Apple has a Vladimir Putin problem
In November 2019, the Russian parliament passed what’s become known as the “law against Apple.” The legislation will require all smartphones to come preloaded with a host of applications that may provide the Russian government with a glut of information about its citizens, including their location, finances, and private communications.
Apple typically forbids the preloading of third-party apps onto its system’s hardware. But come July 2020, when the law goes into effect, Apple will be forced to quit the country and a market estimated at $3 billion unless it complies. This piece of legislation, along with a controversial law aimed at the construction of a “sovereign internet,” is the latest step in Vladimir Putin’s ongoing encroachment into digital space—and has brought Apple into direct conflict with the autocratic Russian president.




To amuse my students.
NSA Security Awareness Posters
From a FOIA request, over a hundred old NSA security awareness posters. Here are the BBC's favorites. Here are Motherboard's favorites.



Friday, January 31, 2020


This is not Newton’s Third Law. In cyberspace, the reaction need not be equal to the force, nor come from a direction we can predict.
If the US launches cyberattacks on Iran, retaliation could be a surprise
On the morning of Jan. 8, the Islamic Revolutionary Guards Corps fired 22 surface-to-surface missiles at two Iraqi airbases. If Americans had died, the Pentagon would have put in front of President Trump options for cyberattacks to disable Iran’s oil and gas sector.
Would the U.S. oil and gas industry have been ready for an Iranian cyber counterattack?
While Americans celebrated Thanksgiving, someone hit Iran with a massive cyberattack that disclosed 15 million Iranian bank debit card numbers on a social media site. On Dec. 11, Iran’s telecommunication minister admitted this was “very big” and that a nation-state carried it out.
Will U.S. banks and credit card companies be ready if Iran tries to hack the card numbers of millions of Americans?
The Trump Administration uses sanctions and cyberattacks as its go-to tools against Iran. U.S. officials have twice acknowledged, on background, recent cyberattacks on Iran.
The implication that cyberattacks are somehow a safer response for the United States than kinetic attacks is dangerous. Iran will retaliate, and the cyber defenses of Iran’s likely targets in the United States are uneven. More needs to be done to prepare the American people for Iranian cyber retaliation.




A sophisticated twist on the classic “man in the middle.”
Hacker snoops on art sale and walks away with $3.1m, victims fight each other in court
Each impacted party is claiming the other is responsible for not detecting the scam.
As reported by Bloomberg, London-based veteran art dealer Simon Dickinson and Rijksmuseum Twenthe were in the midst of negotiations over the acquisition of a valuable painting by John Constable, an English landscape painter of the 18th and 19th centuries.
Conversations took place over email for months, and at some point during the talks, cybercriminals sent spoofed messages to the museum and persuaded Rijksmuseum Twenthe to transfer £2.4 million ($3.1 million) into a Hong Kong bank account.
In the aftermath of the scam, both Simon Dickinson and Rijksmuseum Twenthe are claiming the other side is responsible.
A lawsuit has been filed in London's High Court. The museum, based in Enschede, the Netherlands, claims that the art dealer's negotiators were roped into some of the spoof emails, and yet did not spot the scam.
The museum's lawyer has argued that this silence should be considered "implied representation," according to the publication.
In response, Simon Dickinson says that the dealer did not detect the presence of the eavesdropper and the museum should have double-checked the bank details before transferring any cash.
Each side is also accusing the other of being the source of the theft by allowing their systems to be compromised in the first place.




Patch. Not even the big boys get it right every time.
Severe ‘Perfect 10.0’ Microsoft Flaw Confirmed: ‘This Is A Cloud Security Nightmare’
Microsoft quickly fixed the vulnerability when Check Point approached them in the fall, and customers who have patched their systems are now safe. The vulnerability is as punchy as it gets, “a perfect 10.0,” Balmas says, referring to the CVE score on Microsoft’s disclosure in October. “It’s huge—I can’t even start to describe how big it is.” The reason for the hyperbole is that Balmas says his team found the first remote code execution (RCE) exploit on a major cloud platform. One user could break the cloud isolation separating themselves and others, intercepting code, manipulating programs. That isolation is the basis of cloud security, enabling the safe sharing of common hardware.
There was no detail when Microsoft patched the flaw, just a short explainer.




For those (and I’m talking to you lawyers in particular) who thought there was no need to encrypt your email…
Ray Schultz reports:
A privacy bill that addresses only email has been introduced in the Oklahoma State Legislature.
House Bill 2810, the so-called Oklahoma Email Communication Content Privacy Protection Act, would prohibit email service providers from scanning the subject lines or body of any email communication sent to their users, and from letting any other entity do so.
Read more on MediaPost.
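(A minimal sketch of the point for the lawyers: if the message body is encrypted end to end before the provider ever sees it, there is nothing meaningful left to scan. This uses Python's third-party cryptography library; the key exchange is waved away and the message text is invented.)

from cryptography.fernet import Fernet  # third-party: pip install cryptography

# The key must be shared with the recipient out of band; that hard part is
# omitted here.
key = Fernet.generate_key()
cipher = Fernet(key)

body = b"Privileged and confidential: draft settlement terms attached."
ciphertext = cipher.encrypt(body)   # this is all the mail provider would see
plaintext = cipher.decrypt(ciphertext)

print(ciphertext[:40], b"...")
print(plaintext.decode())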




This week I will teach my students to generate public/private RSA keys, with no backdoor. Will I get a visit from the FBI?
Todd Feathers reports:
The US government is once again reviving its campaign against strong encryption, demanding that tech companies build backdoors into smartphones and give law enforcement easy, universal access to the data inside them.
At least two companies that sell phone-cracking tools to agencies like the FBI have proven they can defeat encryption and security measures on some of the most advanced phones on the market. And a series of recent tests conducted by the National Institute of Standards and Technology (NIST) reveal that, while there remain a number of blind spots, the purveyors of these tools have become experts at reverse engineering smartphones in order to extract troves of information off the devices and the apps installed on them.
Read more on Vice.
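(A minimal sketch of the classroom exercise mentioned above: generating a public/private RSA key pair, with no backdoor, using Python's cryptography library. The parameter choices are conventional, not a prescription.)

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key pair with the standard public exponent.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Serialize both halves to PEM so students can inspect and exchange them.
pem_private = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),  # fine for a classroom demo
)
pem_public = public_key.public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

print(pem_public.decode())
print(pem_private.decode())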




The argument continues.
Why We Should Ban Facial Recognition Technology




The job my students face keeps growing. Something they have noticed.
Data Classification: Not Just for CISOs Anymore
Data classification has always been regarded as a foundational element of any viable data security strategy. After all, most organizations are creating, utilizing and storing more potentially sensitive data than ever before.
The emergence of compliance guidelines and data privacy mandates, such as General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), puts data classification front and center. The necessity of classifying data has grown as organizations must ensure their data is compliant and protected.
At the same time, data classification is proving to have equally valuable implications for corporate privacy initiatives. Because of this, some elements of data classification are moving beyond the realm of the Chief Information Security Officer (CISO) to involve the Chief Privacy Officer (CPO), who is beginning to shoulder more of this responsibility.
These security stakeholders come from different backgrounds and places on the organization chart, yet both bring important perspectives. Rather than engage in meaningless turf wars, savvy CPOs and CISOs increasingly are forming strategic partnerships to elevate data security throughout organizations. It may take time for elements of the new CISO-CPO paradigm to jell, but the common rallying point is a shared reason for being: Safeguarding the organization’s employees, brand and image.
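(A toy sketch of the most basic form of automated data classification: tag records that appear to contain personal data so retention, access, and GDPR/CCPA handling can key off the label. The patterns and labels are illustrative only; pattern matching alone is nowhere near a complete discovery strategy.)

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(record: str) -> str:
    # Anything that looks like it contains personal data gets the stricter label.
    return "restricted" if any(p.search(record) for p in PII_PATTERNS.values()) else "internal"

for rec in [
    "Quarterly roadmap draft",
    "Customer: jane.doe@example.com, card 4111 1111 1111 1111",
]:
    print(classify(rec), "|", rec)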




One example – insurance claims.
What’s the Big Deal about Privacy?
With the rapid expansion of technology entering every field of business, manufacturers and service providers are being presented with previously unconsidered opportunities to reap value from the reuse and repurpose of data initially collected and harvested for other reasons. Learned intelligence through artificial intelligence (AI) systems provides value for the processor not previously realized or recognized in transactions. This is particularly true when considering how AI companies that work with insurers to optimize their claims processing are left with a valuable resource after the data collection is complete. This article addresses how the value of a neural network has been ignored and should be considered when an insurer considers outsourcing its claims processing.¹




Perspective.
Emerging Trends: What to Expect From Privacy Laws in 2020



Thursday, January 30, 2020


Should we assume that Facebook’s lawyers failed to properly estimate the risk or that Facebook’s managers chose to roll the dice?
Facebook may pay Illinois users a couple of hundred dollars each in $550 million privacy settlement
Facebook will pay $550 million to Illinois users to settle allegations that its facial tagging feature violated their privacy rights.
The settlement — which could amount to a couple of hundred dollars for each user who is part of the class-action settlement — stems from a federal lawsuit filed in Illinois nearly five years ago that alleges the social media giant violated a state law protecting residents’ biometric information. Biometric information can include data from facial, fingerprint and iris scans.
Illinois has one of the strictest biometric privacy laws in the nation. The 2008 law mandates that companies collecting such information obtain prior consent from consumers, detailing how they’ll use it and how long it will be kept. The law also allows private citizens to sue.
… “We are expecting a record number of claims to be filed,” Edelson said. “But even with that, we think that the class members are going to get a good amount of money.”




Security and Architecture.
Why 2020 Will Be the Year Artificial Intelligence Stops Being Optional for Security
Artificial intelligence (AI) isn’t new. What is new is the growing ubiquity of AI in large organizations. In fact, by the end of this year, I believe nearly every type of large organization will find AI-based cybersecurity tools indispensable.
Artificial intelligence is many things to many people. One fairly neutral definition is that it’s a branch of computer science that focuses on intelligent behavior, such as learning and problem solving. Now that cybersecurity AI is mainstream, it’s time to stop treating AI like some kind of magic pixie dust that solves every problem and start understanding its everyday necessity in the new cybersecurity landscape. 2020 is the year large organizations will come to rely on AI for security.
AI isn’t magic, but for many specific use cases, the right tool for the job will increasingly involve AI. Here are six reasons why that’s the case.




Perspective.
Collating Hacked Data Sets
Two Harvard undergraduates completed a project in which they went out on the Dark Web and found a bunch of stolen datasets. They correlated all the information, then combined it with additional, publicly available information. No surprise: the result was much more detailed and personal.
"What we were able to do is alarming because we can now find vulnerabilities in people's online presence very quickly," Metropolitansky said. "For instance, if I can aggregate all the leaked credentials associated with you in one place, then I can see the passwords and usernames that you use over and over again."
Of the 96,000 passwords contained in the dataset the students used, only 26,000 were unique.
"We also showed that a cyber criminal doesn't have to have a specific victim in mind. They can now search for victims who meet a certain set of criteria," Metropolitansky said.
For example, in less than 10 seconds she produced a dataset with more than 1,000 people who have high net worth, are married, have children, and also have a username or password on a cheating website. Another query pulled up a list of senior-level politicians, revealing the credit scores, phone numbers, and addresses of three U.S. senators, three U.S. representatives, the mayor of Washington, D.C., and a Cabinet member.
"Hopefully, this serves as a wake-up call that leaks are much more dangerous than we think they are," Metropolitansky said. "We're two college students. If someone really wanted to do some damage, I'm sure they could use these same techniques to do something horrible."
That's about right.
And you can be sure that the world's major intelligence organizations have already done all of this.
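(A minimal sketch of the aggregation the students describe, with invented sample records: group leaked credentials by email address across dumps and flag password reuse.)

from collections import defaultdict

# Each dump is a list of (email, username, password) records; the values here
# are made up for illustration.
dumps = [
    [("alice@example.com", "alice01", "hunter2")],
    [("alice@example.com", "alice_m", "hunter2")],
    [("bob@example.com", "bob99", "p@ssw0rd")],
]

credentials = defaultdict(list)
for dump in dumps:
    for email, username, password in dump:
        credentials[email].append((username, password))

# Accounts that reuse a password across breaches are the easy targets.
for email, creds in credentials.items():
    passwords = [p for _, p in creds]
    if len(passwords) > 1 and len(set(passwords)) < len(passwords):
        print(f"{email}: password reuse across {len(passwords)} leaked records")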




Not comprehensive, but it could be useful.
New web service can notify companies when their employees get phished
Starting today, companies across the world have a new free web service at their disposal that will automatically send out email notifications if one of their employees gets phished.
The service is named "I Got Phished" and is managed by Abuse.ch, a non-profit organization known for its malware and cyber-crime tracking operations.
Just like all other Abuse.ch services, I Got Phished will be free to use.
Subscribing for email notifications is done on a domain name basis, and companies don't have to expose a list of their employee email addresses to a third-party service.




Falls in the old “quality is free” category.
Investment in Privacy Pays Cybersecurity Dividends: Cisco
Cisco's 2020 Data Privacy Benchmark Study attempts to quantify an often-repeated claim from cybersecurity experts: investment in privacy improves overall cybersecurity. For example, last year's Cisco privacy study seemed to indicate that stronger privacy practices streamline vendors' sales cycles.
"A year ago," Robert Waitman, Cisco director of data valuation and privacy, security and trust, told SecurityWeek, "we found those organizations that were ready for GDPR did a better job when it came to streamlining their sales process. This is particularly so in B2B. With customers being more concerned and asking more questions about privacy, those companies with an effective privacy policy can more rapidly and efficiently answer those questions."
His conclusions from the Cisco Data Privacy Benchmark Study 2020 (PDF) are clear. "Firstly," he told SecurityWeek, "companies should be honest and transparent about what they do with personal data. Secondly, privacy is a good corporate investment. There's now a lot of evidence suggesting that companies should go beyond the minimum possible to comply with the law, and seriously invest in privacy. Finally, the issue of privacy certifications is important."




We can be bad therefore we can detect bad in others?
Artificial intelligence, geopolitics, and information integrity
The present article explores the intersection of AI and information integrity in the specific context of geopolitics. Before addressing that topic further, it is important to underscore that the geopolitical implications of AI go far beyond information. AI will reshape defense, manufacturing, trade, and many other geopolitically relevant sectors. But information is unique because information flows determine what people know about their own country and the events within it, as well as what they know about events occurring on a global scale. And information flows are also critical inputs to government decisions regarding defense, national security, and the promotion of economic growth. Thus, a full accounting of how AI will influence geopolitics of necessity requires engaging with its application in the information ecosystem.
This article begins with an exploration of some of the key factors that will shape the use of AI in future digital information technologies. It then considers how AI can be applied to both the creation and detection of misinformation. The final section addresses how AI will impact efforts by nation-states to promote–or impede–information integrity.



Wednesday, January 29, 2020


Big, but not a record.
Wawa's massive card breach: 30 million customers' details for sale online
The Wawa breach may rank as one of the biggest of all time, comparable to earlier Home Depot and Target breaches.
A month before, in December 2019, Wawa had disclosed a major security breach, admitting that hackers planted malware on its point-of-sale systems. Wawa said the malware collected card details for all customers who used credit or debit cards to buy goods at its convenience stores and gas stations. The company said the breach impacted all its 860 convenience retail stores, of which 600 also doubled as gas stations.
According to Wawa, the malware operated for months without being detected, from March 4 until December 12, when it was removed from the company's systems.
The store chain also said "that only payment card information was involved, and that no debit card PIN numbers, credit card CVV2 numbers or other personal information were involved."
However, according to a sample obtained by ZDNet, the card dump did include CVV2 numbers, despite Wawa's claims.
Gemini experts said the Joker's Stash team is currently selling the details of US-issued cards for $17 per card, on average, while data for international cards is priced at a higher $210 per card.




Does my neighbor value my privacy more than the security of the packages Amazon leaves on his porch?
Amazon Engineer: ‘Ring should be shut down immediately and not brought back’
An Amazon software engineer named Max Eliaser is calling for the shutdown of Ring, the doorbell camera company Amazon paid $2 billion for in 2018.
They wrote:
The deployment of connected home security cameras that allow footage to be queried centrally are simply not compatible with a free society. The privacy issues are not fixable with regulation and there is no balance that can be struck. Ring should be shut down immediately and not brought back.
Those are strong words, but he’s not alone in thinking them. A growing contingent of civil rights advocates, surveillance experts, and pundits are working to raise awareness about the potential dangers of Ring’s doorbell cameras.
Ring sold nearly 400,000 units in the month of December, according to estimates.
...This indicates that privacy advocates are losing the battle against ubiquitous surveillance, something many feel could destroy the bedrock of democracy.
One of the biggest concerns with Ring cameras is that people who choose not to install one or participate in the local surveillance network (a connected community software system called “Neighborhoods” that gives police backdoor access to users’ footage) can’t choose to opt out.
If your neighbor has a Ring camera you can’t make them, Amazon, or the police exclude footage of you, your family, and your guests from their recordings. Any bad actor wishing to misuse or abuse the system – whether it’s an Amazon employee, police officer subverting your legal right to privacy, or a hacker seeking to cause you harm – only needs access to a camera nearby, even if you don’t own one.




Asking your hacker/thief to be honest? No indication that their cyber insurance company asked them to pay the ransom.
Denver’s Regis University paid ransom to “malicious actors” behind campus cyberattack
When “malicious actors” carried out a cyberattack on Regis University last August — crippling the Denver campus’s IT network and downing phones, email and Wi-Fi — university officials paid the hackers a ransom in hopes of restoring their incapacitated systems.
Yet even after that payment, which Regis leaders publicly revealed for the first time to The Denver Post, the cyberattack still impaired day-to-day operations at the private Jesuit college for months.
On Tuesday, Regis is holding a cybersecurity summit nearly six months after the university’s systems were hacked, gathering professionals from across the country to publicly talk about the ransomware attack and share what the institution and others impacted have learned, all in a bid to help prevent such incidents from happening again.


(Related) “Yes, we have information that will help you avoid ransomware. No, you can’t have it.” Must be no Jesuits in Baltimore.
After ransomware took Baltimore hostage, Maryland introduces legislation that bans disclosing the bugs ransomware exploits




Worth checking?
Facebook privacy tool gives users more info on how they are tracked
USAToday: “It’s been way overdue. But Facebook has finally released a long-promised tool that could give you more control over how the social network traces your path across the web. CEO Mark Zuckerberg announced the global availability of this “Off-Facebook Activity” tool in a blog post Tuesday on Data Privacy Day. It’s part of an effort to fix and rewrite Facebook’s poor scandal-riddled narrative on privacy. Facebook exploits information that businesses routinely share with Facebook about your activities when you’re beyond the virtual corridors of the social network to serve up ads customized to your interests. They use such business-oriented tools as Facebook Pixel, the Facebook SDK and the Facebook Login. But you need not sign into a site or app through Facebook Login for a business to share an interaction with Facebook. Other triggers include opening an app, adding an item to a shopping cart or making a donation. The Off-Facebook Activity tool that is now available across the Facebook network lets you view a summary of such apps and websites and ask Facebook to clear the past information about such activities. With a little bit of extra work, you can also ask Facebook to disassociate your future activity from your account…”




What can we learn, adapt or avoid?
How Technology Is Changing Health Care in India
Despite its shortcomings, India’s health care sector has a lot going for it on several fronts. A government-led push to get health care providers to embrace electronic medical records is enabling artificial intelligence (AI) to extract insights from patient data to deliver better treatment. The availability of telecom bandwidth is making medical expertise reach underserved rural markets through telemedicine and tele-consulting programs, delivered over mobile phones.



Tuesday, January 28, 2020


For my Computer Security managers. An ounce of prevention can save a ton of repair costs.
Berlin’s high court should rebuild computer system after Emotet infection, report finds
Berlin’s highest court should completely rebuild its computer infrastructure after hackers ran roughshod through the network and likely stole data in the process, according to a forensic report released Monday.
Poor security controls allowed the attackers to install two types of information-stealing malware last fall, said the study conducted by an IT subsidiary of Deutsche Telekom and released by German lawmakers investigating the incident.
“A motivated attacker would have been able to use this network structure to infect almost every device,” the report states.




A ransomware minimum?
Federal agency offers guidelines for businesses defending against ransomware attacks
The National Institute of Standards and Technology (NIST) published draft guidelines Monday providing businesses with ways to defend against debilitating ransomware attacks.
The two draft practice guides are intended to help firms create strategies to protect data in the event of a cyberattack.




Should the US do less?
IoT security: Your smart devices must have these three features to be secure
Proposed laws from the UK for Internet of Things security mean vendors will need to follow new rules to be considered secure.
The legislation would require IoT devices to follow three particular rules before they can be sold in the UK. They are (a short provisioning sketch for the first rule follows the list):
  • All consumer internet-connected device passwords must be unique and not resettable to any universal factory setting
  • Manufacturers of consumer IoT devices must provide a public point of contact so anyone can report a vulnerability and it will be acted on in a timely manner
  • Manufacturers of consumer IoT devices must explicitly state the minimum length of time that the device will receive security updates at the point of sale, either in store or online
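(A brief sketch of how a manufacturer might satisfy the first rule: provision each device with its own randomly generated default credential instead of a universal factory password. The device IDs and password policy here are hypothetical.)

import secrets
import string

def provision_default_password(length: int = 16) -> str:
    # One unique, random default credential per device, generated at manufacture.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# No two devices leave the line sharing a factory default.
device_credentials = {f"device-{n:04d}": provision_default_password() for n in range(3)}
for device_id, password in device_credentials.items():
    print(device_id, password)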




Architecture for AI.
AI IS DRIVING STORAGE DOWN NEW AVENUES
Storage systems are inherently data intensive. But the rapid emergence of artificial intelligence as a standard datacenter workload has storage vendors scrambling to design platforms that better meet the more stringent performance needs of these applications.
… Our recent conversation with Panasas storage architect Curtis Anderson shows how AI is driving these designs in new directions. The full interview, filmed live at The Next HPC Platform, can be found below.


(Related)
Why Apple And Microsoft Are Moving AI To The Edge
Artificial intelligence (AI) has traditionally been deployed in the cloud, because AI algorithms crunch massive amounts of data and consume massive computing resources. But AI doesn’t only live in the cloud. In many situations, AI-based data crunching and decisions need to be made locally, on devices that are close to the edge of the network.




Another look at ethics.
DAY ZERO ETHICS FOR MILITARY AI
Examining the legal, moral, and ethical implications of military artificial intelligence (AI) poses a chicken-and-egg problem: Experts and analysts have a general sense of the risks involved, but the broad and constantly evolving nature of the technology provides insufficient technical details to mitigate them all in advance. Employing AI in the battlespace could create numerous ethical dilemmas that we must begin to guard against today, but in many cases the technology has not advanced sufficiently to present concrete, solvable problems.
To this end, 2019 was a bumper year for general military AI ethics. The Defense Innovation Board released its ethical military AI principles; the National Security Commission on AI weighed in with its interim report; the European Commission developed guidelines for trustworthy AI; and the French Armed Forces produced a white paper grappling with a national ethical approach. General principles like these usefully frame the problem, but it is technically difficult to operationalize “reliability” or “equitability,” and assessing specific systems can present ambiguity — especially near the end of development.