Saturday, January 18, 2020

A private face?
Facebook is ordered to hand over data about thousands of apps that may have violated user privacy
A Massachusetts judge has ordered Facebook to turn over data about thousands of apps that may have mishandled its users’ personal information, rejecting the tech company’s earlier attempts to withhold the key details from state investigators. The decision amounted to a significant victory for Massachusetts Attorney General Maura Healey, who said that Facebook users — and local watchdogs — “have a right to know” whether their privacy has been violated.

Another privacy concern to add to future privacy acts?
AP reports:
Florida lawmakers advanced a proposal Thursday that would bar life insurers from using information from commercially available genetic tests to deny policies or set premiums based on markers that might be discovered through DNA home kits.
The effort comes amid the booming popularity of heavily marketed genetic testing and the rising concerns from privacy groups and lawmakers.
Read more on Shelton Herald.

LEAK: Commission considers facial recognition ban in AI ‘white paper’
The European Commission is considering measures to impose a temporary ban on facial recognition technologies used by both public and private actors, according to a draft white paper on Artificial Intelligence obtained by EURACTIV.

Interesting question.
Why Twitter May Be Ruinous for the Left
It’s a machine for misunderstanding other people’s ideas and identities. How do you even organize that?

Want optimized AI? Rethink your storage infrastructure and data pipeline
Most discussions of AI infrastructure start and end with compute hardware — the GPUs, general-purpose CPUs, FPGAs, and tensor processing units responsible for training complex algorithms and making predictions based on those models. But AI also demands a lot from your storage. Keeping a potent compute engine well-utilized requires feeding it with vast amounts of information as fast as possible. Anything less and you clog the works and create bottlenecks.
Optimizing an AI solution for capacity and cost, while scaling for growth, means taking a fresh look at its data pipeline. Are you ready to ingest petabytes worth of legacy, IoT, and sensor data? Do your servers have the read/write bandwidth for data preparation? Are they ready for the randomized access patterns involved in training?
Answering those questions now will help determine your organization’s AI-readiness.
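Those pipeline questions all come down to one pattern: keeping the compute engine busy while storage serves up the next batch. As a minimal sketch of that overlap (not any particular framework's API — `prefetch` and `slow_read` are hypothetical names for illustration), a bounded background queue lets reads proceed while the current batch is being processed:

```python
import queue
import threading
import time

def prefetch(batches, buffer_size=4):
    """Wrap a batch iterator so that reading the next batch from storage
    overlaps with compute on the current batch, via a bounded queue."""
    q = queue.Queue(maxsize=buffer_size)
    _DONE = object()  # sentinel marking end of the stream

    def reader():
        for batch in batches:
            q.put(batch)  # blocks when the buffer is full (backpressure)
        q.put(_DONE)

    threading.Thread(target=reader, daemon=True).start()
    while True:
        item = q.get()
        if item is _DONE:
            break
        yield item

def slow_read(n):
    """Stand-in for reading n batches from slow storage."""
    for i in range(n):
        time.sleep(0.01)  # simulated I/O latency
        yield list(range(i, i + 4))

# Compute on each batch while the next one is read in the background.
results = [sum(b) for b in prefetch(slow_read(5))]
```

The same idea scales up: if storage bandwidth cannot fill the buffer faster than the accelerator drains it, the pipeline stalls — which is exactly the bottleneck the questions above are probing.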

Yeah… No!
Life will soon be like ‘Her’ — and we’ll fall in love with AI
… Dr. Maciej Musial of Adam Mickiewicz University in Poznan, Poland, has pointed out that people will soon fall into the arms of humanoid robots and artificial intelligence apps on their smartphones. Evidence for this can be found in the fact that people are already growing attached to gadgets such as smartphones. The research further suggests that the formation of emotional relationships between humans and artificial intelligence, under various guises, is becoming an increasingly frequent phenomenon.
… David Hanson, who created the famous lifelike Sophia robot, recently revealed that humans are only a few decades away from marrying droids. There are already robots in the world today that bridge the gap of intimacy required for a deep emotional partnership. The researcher suggests that humanoids will gain the same rights as humans by the year 2045. This would include the right to own land, vote in general elections, and even marry.
Hanson also suggests that by the year 2035, robots will be able to accomplish almost everything that humans do. They might even start their own ‘Global Robotic Civil Rights Movement’ by 2038 and compel leaders to provide them with equal status in the human world.

AI for the defense? Why “cute?”
This Company Made a 'Cute' AI Lawyer to Deploy 'Information Warfare' for Divorced Men
A man who feels wronged by his ex-wife thinks he can help ex-husbands everywhere with an artificially intelligent legal assistant that collects public court records to help clients file lawsuits and predicts what the opposing legal team will do next.
He also gave this piece of software a female avatar, a woman in a pencil skirt and heels he named Justine Falcon.

For the toolkit.
Microsoft Introduces Free Source Code Analyzer
Called Microsoft Application Inspector, the new tool doesn’t focus on discovering poor programming practices in the analyzed code. Instead, it looks for interesting features and metadata, such as cryptography, connections to remote resources, and the underlying platform.
Application Inspector was released in open source and is available for download from Microsoft’s GitHub repository.

Friday, January 17, 2020

Could Equifax have secured its data for less than $1 Billion? Is $1 Billion enough to guarantee future security?
Equifax Ordered to Spend $1 Billion on Data Security Under Data Breach Settlement
On January 13, 2020, a federal court approved the proposed settlement for the class action suit filed against Equifax over the massive data breach it disclosed in September 2017.
As per the settlement, the credit reporting agency “will pay $380,500,000 into a fund for class benefits, attorneys’ fees, expenses, service awards, and notice and administration cost.” Attorneys have been awarded nearly $80 million.
If the amount proves insufficient, the company will pay an additional $125 million for claims for out-of-pocket losses, “and potentially $2 billion more if all 147 million class members sign up for credit monitoring,” the court’s final approval order reads (PDF).
The court also revealed that Equifax has agreed “to spend a minimum of $1 billion for data security and related technology over five years and to comply with comprehensive data security requirements,” which should reduce the likelihood of a similar data breach in the future.

Why not inform all the players?
FBI Changes Policy for Notifying States of Election Systems Cyber Breaches [paywall] – “The Federal Bureau of Investigation will notify state officials when local election systems are believed to have been breached by hackers, a pivot in policy that comes after criticism that the FBI wasn’t doing enough to inform states of election threats.
The FBI’s previous policy stated that it notified the direct victims of cyberattacks, such as the counties that own and operate election equipment, but wouldn’t necessarily share that information with states. Several states and members of Congress in both parties had criticized that policy as inadequate and one that stifled state-local partnerships on improving election security…”

An example of ‘undue reliance’?
Criminals are using ‘Frankenstein identities’ to steal from banks and credit unions
  • So-called synthetic identity fraud is the fastest-growing financial crime, according to the Federal Reserve, driven in part by lending moving online. It’s also one of the hardest to detect.
  • Instead of outright stealing an identity, a criminal makes one up in what’s sometimes called a “Frankenstein” identity. The criminal then spends years building up credit under a fake alias.
  • “It’s a really long con and an expensive one,” says Naftali Harris, co-founder and CEO of San Francisco-based start-up SentiLink. “But once you have this fake person who has an 800 credit score, you can then use that to get multiple high limit credit cards and unsecured loans from banks.”

Should we block phishy emails?
These subject lines are the most clicked for phishing
(This also represents the actual capitalization and spelling used in the original phishing subject lines.)
  1. Change of Password Required Immediately 26%
  2. Microsoft/Office 365: De-activation of Email in Process 14%
  3. Password Check Required Immediately 13%
  4. HR: Employees Raises 8%
  5. Dropbox: Document Shared With You 8%
  6. IT: Scheduled Server Maintenance – No Internet Access 7%
  7. Office 365: Change Your Password Immediately 6%
  8. Avertissement des RH au sujet de l'usage des ordinateurs personnels 6%
  9. Airbnb: New device login 6%
  10. Slack: Password Reset for Account 6%
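One answer to the blocking question: a mail gateway can at least flag subjects that match the patterns above. Here is a small Python sketch of that idea; the pattern list and the `looks_phishy` name are illustrative assumptions, and a real filter would weigh many signals beyond the subject line alone.

```python
import re

# Hypothetical patterns distilled from the most-clicked subject lines above.
SUSPECT_PATTERNS = [
    r"password\s+(check|reset|change)",
    r"change\s+(of\s+)?password",
    r"de-?activation\s+of\s+email",
    r"required\s+immediately",
    r"document\s+shared\s+with\s+you",
    r"new\s+device\s+login",
]

def looks_phishy(subject):
    """Return True if the subject matches any known-suspicious pattern."""
    s = subject.lower()
    return any(re.search(p, s) for p in SUSPECT_PATTERNS)

# Screen a batch of incoming subjects.
flagged = [s for s in [
    "Change of Password Required Immediately",
    "Quarterly roadmap review",
    "Airbnb: New device login",
] if looks_phishy(s)]
```

Note the trade-off: pattern lists like this catch last year's lures and will false-positive on legitimate IT notices, which is why blocking (rather than flagging for review) is a harder call.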

We need all the help we can get.
French Supervisory Authority Publishes Second Guidance on Cookies and Similar Technologies
On January 14, 2020, the French Supervisory Authority (“CNIL”) published a new draft guidance on the use of cookies and similar technologies on websites and applications (see here, in French). The draft guidance is open for public consultation until February 25, 2020.
In its nine articles, the guidance sets out how to properly inform users and collect their consent in this context. For each requirement, the guidance provides examples and best practices.

Seeking agreement...
8 ways to ensure your company's AI is ethical
Workday recently published our Commitments to Ethical AI to show how we operationalize principles that build directly on our core values of customer service, integrity and innovation. Based on our experiences, here are eight lessons for technology companies looking to champion those principles across their organization:
1. Define what 'AI ethics' means.
2. Build ethical AI into the product development and release framework.
3. Create cross-functional groups of experts
4. Bring customer collaboration into the design, development and deployment of responsible AI.
5. Take a lifecycle approach to bias in machine learning.
6. Be transparent.
7. Empower your employees to design responsible products.
8. Share what you know and learn from others in the industry.

Thursday, January 16, 2020

For my security students.
Ransomware Costs in 2019
“In 2019, the U.S. was hit by an unprecedented and unrelenting barrage of ransomware attacks,” said Emsisoft’s The State of Ransomware in the US: Report and Statistics 2019. The ransomware costs of 2019 are higher than they ever have been, and are expected to rise even further in 2020.
Ransomware hit at least 966 government agencies, educational establishments and healthcare providers. To be more specific:
  • 113 state and municipal governments and agencies
  • 764 healthcare providers
  • 89 universities, colleges and school districts. This means that up to 1,233 individual schools were affected.
It’s hard to know exactly how much a ransomware attack costs, but Emsisoft estimates that the combined costs in 2019 alone could have exceeded $7.5 billion.

Not incentivized by 4% of global revenue?
Companies Use 'Dark Patterns' to Mislead Users About Privacy Law, Study Shows
Taking effect in May 2018, Europe’s General Data Protection Regulation (GDPR) was supposed to usher in a new age of consumer privacy transparency and protection across Europe. Instead, researchers say companies have been tap dancing around the law with little to no meaningful enforcement by European Union member countries and regulators.
A new joint study by researchers at MIT, UCL, and Aarhus University found that websites in the EU not only aren’t adhering to the law, many are using required privacy alerts to mislead users.

We’ll be trying to comply with many contradictory laws until Congress stops lollygagging.
State Privacy Trends to Watch in 2020
While all eyes are on California following the implementation of the California Consumer Privacy Act (“CCPA”) earlier this month and the start of enforcement later this year, other states are off to the privacy races already. On Monday, Washington State became the latest entrant with the introduction of a revised Washington Privacy Act.
From the proposals introduced so far this year in Washington, Virginia, New Hampshire, Illinois, and Nebraska, it is clear that states will continue to follow last year’s trend of varied approaches to state privacy legislation.

A different path to a privacy law?
Ottawa considering 'significant and meaningful' compensation for privacy breach victims
Mandate letters for Innovation, Science and Industry Minister Navdeep Bains and Heritage Minister Steven Guilbeault say they've been asked by Prime Minister Justin Trudeau to work on a "digital charter" that would include legislation to give Canadians "appropriate compensation" when their personal data is breached.
It's not clear when the legislation will be introduced, or what a compensation package would even look like, but Bains said it will include punitive fines for those found guilty of breaching personal data.
"It will be significant and meaningful to make it very clear that privacy is important. Compensation, of course, is one aspect of it," said Bains, adding that the government also wants "to demonstrate to businesses very clearly that there are going to be significant penalties for non-compliance with the law. That's really my primary goal."
Statistics Canada says that about 57 per cent of Canadians online reported experiencing a cyber security incident in 2018.
Ryan Berger, a privacy lawyer with Lawson Lundell in Vancouver, said legislating compensation could get private companies to start taking privacy more seriously.
"It will incentivize organizations ... to take steps to protect that information and ensure that, for instance, health information is encrypted," he said.

For everyone.
Verizon Media launches OneSearch a privacy-focused search engine
VentureBeat: “Verizon Media, the media and digital offshoot of telecommunications giant Verizon, has launched a “privacy-focused” search engine called OneSearch. The launch comes at a time when public trust in big technology companies has hit rock bottom following countless reports of breaches, lapses, and data harvesting escapades. Consequently, “privacy” is pretty much the buzzword of choice emanating from most of the big tech companies, and with its new search engine, it’s clear that Verizon is adopting a similar tack. With OneSearch, Verizon promises there will be no cookie tracking, no ad personalization, no profiling, no data-storing, and no data-sharing with advertisers…”

A focus on facial recognition.
FPF Director of AI & Ethics Testifies Before Congress on Facial Recognition
In a hearing today before the House Committee on Oversight and Reform, Future of Privacy Forum (FPF) Senior Counsel and Director of AI and Ethics Brenda Leong testified on the privacy and ethical implications of the commercial use of facial recognition technology.
To read Leong’s written testimony, click here. For an archived livestream of the committee hearing, visit

Wednesday, January 15, 2020

Imagine the same level of success hacking a major airline. Would insurance cover the loss of business? The problem with less-than-full disclosure is that we don’t know what to prepare for.
Impact of Cyber Attacks on RavnAir More Damaging Than First Thought; Flights May Be Grounded for a Month
It had been thought that the company recovered fairly quickly from the malicious cyber attack, but a statement released just before the new year kicked off indicates that the company may have more delayed and canceled flights into February.
… During the weekend prior to Christmas, an unspecified cyber attack targeted the company’s Dash 8 passenger flights and caused about six of them to be grounded over the busy weekend as a security precaution.
… The FBI and an unspecified third-party cyber security company have been called in to investigate the impact of the cyber attacks on Ravn as the company is working on restoring everything.
As with the recent attack on Travelex, the company has opted to keep details about the attack very scanty. But, as with Travelex, ransomware seems to be a fairly safe assumption given the patterns of disruption to service and the long expected recovery period.

Some numbers. Interesting because of the companies based there.
Washington State Attorney General’s Office 2019 Data Breach Report
For those who may not know, Washington State produces its own data breach report annually. Here’s a snippet from their report:
In 2019, the total number of breaches reported to our office increased by nearly 20%, with just over 70% resulting from a malicious cyberattack.
Yep, the percentage increase in number of incidents/reports sounds about right.
The lifecycle of breaches increased dramatically, rising from an overall average of 139 days in 2018 to 277 days in 2019. This was largely driven by a huge spike in the amount of time it took organizations to discover that a breach had occurred.
Interesting, because ransomware attacks are recognized quickly, but may take longer to resolve. Similarly, it may take entities months to find out who had PII in an employee’s email account that had been compromised.
So there’s lots to think about and talk about. You can access the state’s 2019 report here. What I found stunning was the number of breaches reported to the state for a one-year period. But then, the number of reports is at least partly a function of how state law defines a reportable breach.

Clearly, the Fed is a major target.
A cyberattack on a major US financial institution would affect more than a third of bank assets, New York Fed warns
A sophisticated cyberattack on the US could ripple through major banks and severely disrupt the broader financial system, according to new research from the New York Federal Reserve.
A cyberattack on the data or systems of any one of the five most active banks could spill over to others and affect more than a third of assets in the overall network, analysts Thomas Eisenbach, Anna Kovner, and Michael Junho Lee said in the staff report this week.
"The reconciliation and recuperation process would be an unprecedented task," the paper said. "This could have severe implications on the stability of the broader financial system vis-à-vis spillovers to investors, creditors, and other financial market participants."

Social engineering based on known vulnerabilities.
Don't fall for this Google Nest sextortion scam
Scammers have been targeting people with Google Nest security camera footage as part of a widespread 'sextortion' campaign, according to Computer Weekly.
Affecting 1,700 people (mainly in the US), the scam was uncovered by email cyber security company, Mimecast, which said that the campaign started in early January.
A sextortion email scam is when perpetrators claim to have compromising footage of the victim – which they'll then surrender once they have been paid.
According to Addison, these emails can be safely ignored. She explained: “The campaign is exploiting the fact people know these devices can be hacked very easily and preying on fears of that.”
It is now widely known that many IoT (Internet of Things) devices lack basic security and are vulnerable to hacking, meaning that victims are more likely to believe the fraudsters’ claims, since the possibility of their device having really been hacked is highly plausible.
How the scammers gained access to the victims' email addresses or the Google Nest footage is unclear.

I’m increasingly concerned that the next war will be digital and most people won’t even recognize it when they see it. This is merely a start.
'We want to win the next war': US Army will revamp cyber operations to counter Russia and China
As warfare continues to enter the digital realm, the Army plans to transform its cyber operations branch into a full-scale information warfare command, according to a top U.S. general.
The service will convert Cyber Command into the Army Information Warfare Command, Army Chief of Staff Gen. James McConville said at a panel on Tuesday. It’s one of the several modernization efforts the Army is taking on to counter "great power" opponents like Russia and China.

Companies increasingly reporting attacks attributed to foreign governments
More than one in four security managers attribute attacks against their organization to cyberwarfare or nation-state activity, according to Radware.

Open source…
How digital sleuths unravelled the mystery of Iran’s plane crash
Wired – Open-source intelligence proved vital in the investigation into Ukraine Airlines flight PS752. Then Iranian officials had to admit the truth: “…It’s not unusual nowadays for OSINT to lead the way in decoding key news events. When Sergei Skripal was poisoned, Bellingcat, an open-source intelligence website, tracked and identified his killers as they traipsed across London and Salisbury. They delved into military records to blow the cover of agents sent to kill. And in the days after the Ukraine Airlines plane crashed into the ground outside Tehran, Bellingcat and The New York Times have blown a hole in the supposition that the downing of the aircraft was an engine failure. The pressure – and the weight of public evidence – compelled Iranian officials to admit overnight on January 10 that the country had shot down the plane ‘in error’…”
So how do they do it? “You can think of OSINT as a puzzle. To get the complete picture, you need to find the missing pieces and put everything together,” says Loránd Bodó, an OSINT analyst at Tech versus Terrorism, a campaign group. The team at Bellingcat and other open-source investigators pore over publicly available material. Thanks to our propensity to reach for our cameraphones at the sight of any newsworthy incident, video and photos are often available, posted to social media in the immediate aftermath of events. (The person who shot and uploaded the second video in this incident, of the missile appearing to hit the Boeing plane was a perfect example: they grabbed their phone after they heard “some sort of shot fired”.) “Open source investigations essentially involve the collection, preservation, verification, and analysis of evidence that is available in the public domain to build a picture of what happened,” says Yvonne McDermott Rees, a lecturer at Swansea University…”

How long before this technology is banned? (Unless the manufacturer is willing to give the FBI a backdoor?)
How to be anonymous in the age of surveillance
The Seattle Times: “Cory Doctorow’s sunglasses are seemingly ordinary. But they are far from it when seen on security footage, where his face is transformed into a glowing white orb. At his local credit union, bemused tellers spot the curious sight on nearby monitors and sometimes ask, “What’s going on with your head?” said Doctorow, chuckling. The frames of his sunglasses, from Chicago-based eyewear line Reflectacles, are made of a material that reflects the infrared light found in surveillance cameras and represents a fringe movement of privacy advocates experimenting with clothes, ornate makeup and accessories as a defense against some surveillance technologies. Some wearers are propelled by the desire to opt out of what has been called “surveillance capitalism” — an economy that churns human experiences into data for profit — while others fear government invasion of privacy…
Today, artificial intelligence (AI) technology, such as facial recognition, has become more widespread in public and private spaces — including schools, retail stores, airports, concert venues and even to unlock the newest iPhones. Civil-liberty groups concerned about the potential for misuse have urged politicians to regulate the systems. A recent Washington Post investigation, for instance, revealed FBI and Immigration and Customs Enforcement agents used facial recognition to scan millions of Americans’ driver’s licenses without their knowledge to identify suspects and undocumented immigrants…”

Train your dragon.
Stanford Researchers Publish AI Index 2019 Report
The Stanford University Human-Centered Artificial Intelligence Institute published its AI Index 2019 Report. The 2019 report tracks three times the number of datasets as the previous year's report and contains nearly 300 pages of data and graphs related to several aspects of AI, including research, technical performance, education, and societal considerations.
The report is the result of an effort led by the Institute's AI Index Steering Committee, a team of researchers and industry experts chaired by AI21Labs co-founder Yoav Shoham. This is the report's third year, and it includes updates of previous metrics as well as new ones. In addition to the report, the committee has released two web-based tools: the Global AI Vibrancy Tool for comparing data across countries, and the arXiv Monitor for searching pre-print research papers to track technical metrics. According to the Committee's web site, the Index's mission is:
to provide unbiased, rigorous, and comprehensive data for policymakers, researchers, journalists, executives, and the general public to develop a deeper understanding of the complex field of AI.

Tuesday, January 14, 2020

For my Ethical Hacking students.
Tesla hacking competition offers $1 million and free car if someone can hijack Model 3
San Francisco: Electric automaker Tesla has once again challenged hackers to find bugs in its connected cars.
The Elon Musk-run company is returning to the annual hackers’ competition “Pwn2Own,” to be held in Vancouver in March, Electrek reports.
Some Model 3 cars and $1 million in award money will be up for grabs.

It’s going to be a busy year.
Future of Privacy Forum Releases Analysis of Washington Privacy Act

It’s Raining Privacy Bills: An Overview of the Washington State Privacy Act and other Introduced Bills
Today, on the first day of a rapid-fire 2020 legislative session in the state of Washington, State Senator Carlyle has introduced a new version of the Washington Privacy Act (WPA).

Perhaps “I know it when I see it” won’t work for AI.
AI education for the policy community, from military leaders to Hill staffers to senior government officials, needs to happen now, as cases outside of the national security sphere illustrate. For example, in December 2017, the New York City council unanimously passed a bill that required the Office of the Mayor to form a task force dedicated to increasing transparency and oversight of the algorithms used by the city. A local city council member characterized the bill as essential, saying of the algorithms, “I don’t know what it is. I don’t know how it works. I don’t know what factors go into it. As we advance into the 21st century, we must ensure our government is not ‘black boxed.’”
This example touches on a key dynamic: Top policymakers — who are generally not technically trained — are at an increasing risk of being “black boxed” as technological complexity increases. This is especially true given questions even at the vanguard of AI research about the “explainability” of algorithms. However, organizational decisions about AI adoption and applications that “generally shape the impact of that technology” are being made today. Section 256 of the FY20 NDAA provides a first step towards national security literacy in AI, requiring the Secretary of Defense to develop an AI education strategy for military servicemembers. But there are questions about implementation, given well-known issues with professional military education, and the national security community more broadly requires AI literacy, not just military service members, since most policymakers are not in uniform.

It’s a letter, not a text.
Profs say teaching students how to email them properly is gift that keeps on giving
Journal of Higher Ed: “Somewhere between birth and college, students hopefully have learned how to compose concise, grammatically correct and contextually appropriate emails. Often they haven’t. So, to head off 3 a.m. need-your-help-now emails from Jake No Last Name, many professors explicitly teach students how to email them at the start of the academic year. Approaches vary. A number of professors use specific reference documents. A pointer webpage called “How to Email a Professor,” posted by Michael Leddy in 2005, is still quite popular with professors and individual students: by early last year it had been accessed some 675,000 times from 135 countries and territories. Biologist and “Seven-Minute Scientist” Amy B. Hollingsworth wrote “Five Ways to Get a Busy Professor to Answer Your Emails That Don’t Involve a Bribe.” A 2015 op-ed co-written by two Southeastern University professors of English is still sometimes one of the most-read articles on Inside Higher Ed. The University of California, Santa Cruz, offers advice about emailing research professors here. And Laura Portwood-Stacer’s template, published on Medium in 2016, has lots of fans….”

Also for my students.
5 Free CV Apps to Create a Beautiful Resume That Recruiters Will Read