Saturday, March 24, 2018

Riders and overriders…
U.S. Authorities Get Access to Data Stored on Overseas Cloud Servers
A $1.3 trillion spending deal that President Donald Trump signed Friday includes a measure that gives U.S. investigators access to data stored on overseas cloud servers, resolving a long-running legal battle between law enforcement and big tech companies.
But the measure drew widespread criticism from privacy and human-rights activists, who suggested U.S. tech companies—under pressure in Washington—had retreated on the issue. They also suggested the bill could leave data stored in the U.S. vulnerable to demands...

Every change has a downside? Probably not total doom, but some impact among informed users.
Facebook Fallout Could Deal Blow to Legitimate Academic Research
… The data at the center of the scandal was supposedly collected for research purposes through an online personality quiz app in 2014. Created by Cambridge University psychologist Aleksandr Kogan, the app required that users grant access to parts of their profiles, and in turn allow it to draw information about their friends as well. Kogan then turned that information — collected from tens of millions of users — over to Cambridge Analytica. Facebook claims that when it found out about the violation of its terms in 2015, it was assured the data had been deleted. (It hadn’t.)
… Although Facebook has cut back on the amount and types of data it shares with third-party companies since then, drawing information through apps like the one created by Cambridge Analytica is fairly routine. With tighter controls on this and other methods of data collection in the pipeline, however, researchers may soon encounter more red tape. Moreover, those who require audience engagement in their work may find it harder to recruit participants.
“People who consented thought it was for science,” Casey Fiesler, an assistant professor in the Department of Information Science at the University of Colorado Boulder, told The Guardian of Kogan’s quiz. “Will people stop wanting to participate in studies?”

(Related) Oh the horror, the horror. Ah… Wait. Maybe not.
Why Nothing Is Going To Happen To Facebook Or Mark Zuckerberg
As Facebook’s Cambridge Analytica scandal spiraled into chaos this week, a frantic hail of notes from Wall Street analysts reached investor inboxes with a clear and definitive directive: Buy.
… Analysts told investors to buy the dip. Advertisers kept spending. Legislators continued to sit on their hands while a basic ad transparency bill rotted in Congress. And though users posted #DeleteFacebook en masse, Facebook actually rose to 8th place from 12th in the iOS mobile App Store since the day before the Cambridge Analytica news broke. It’s holding steady on Android, too.
… Amid a weeklong reaming from legislators, press, and public, Facebook relied on a playbook that’s worked effectively during its past crises, be it in failing to rein in fake news, or subversive foreign activity, or discriminatory ad targeting: The company first downplays the problem, then hunkers down as outrage builds. When it can no longer ignore the outrage, it speaks up and apologizes, rolls out some fixes, and returns to normal. Facebook has had practice at this sort of thing, and it’s gotten good at it.

Something for my Architecture students to ponder.
An announcement on January 24 didn’t get the large amount of attention it deserved: Apple and 13 prominent health systems, including prestigious centers like Johns Hopkins and the University of Pennsylvania, disclosed an agreement that would allow Apple to download onto its various devices the electronic health data of those systems’ patients — with patients’ permission, of course.
It could herald truly disruptive change in the U.S. health care system. The reason: It could liberate health care data for game-changing new uses, including empowering patients as never before.
… Frustration has increased interest in a very different approach to data sharing: Give patients their data, and let them control its destiny. Let them share it with whomever they wish in the course of their own health care journey.
Several technology companies — including Google and Microsoft — tried this in the early 2000s, but their efforts failed.
… A world in which patients have ready access to their own electronic data with the help of facilitators like Apple creates almost unfathomable opportunities to improve health care and health. First, participating patients would no longer be dependent on the bureaucracies of big health systems or on understaffed physician offices to make their own data available for further care. This could improve the quality of services and reduce cost through avoiding duplicative and unnecessary testing.
Second, the liberation of patients’ data makes it possible for consumer-oriented third parties to use that data (with patients’ permission) to provide new and useful services that help patients manage their own health and make better health care choices. Such consumer-facing applications — if they are designed to be intuitive, usable, and accurate — have the potential to revolutionize patient-provider interactions and empower consumers in ways never before imagined in the history of medicine. Imagine Alexa- or Siri-style digital health advisors that can respond to consumer questions based on users’ unique health care data and informed by artificial intelligence. Health care could start to function much more like traditional economic markets.
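For the curious, here is a minimal sketch of what "responding to questions based on users' unique health care data" could look like at the data level. Apple's health-records effort is reportedly built on the FHIR standard; the Observation record and the summarize helper below are invented for illustration, not anything from the agreement.

```python
# Hypothetical sketch: a consumer-facing app reading a FHIR-style
# Observation record and turning it into a plain-language answer.
# The record below is invented sample data, not a real patient's.

observation = {
    "resourceType": "Observation",
    "code": {"text": "Hemoglobin A1c"},
    "valueQuantity": {"value": 6.2, "unit": "%"},
}

def summarize(obs):
    """Return a one-line, plain-language summary of an Observation."""
    name = obs["code"]["text"]
    qty = obs["valueQuantity"]
    return f"{name}: {qty['value']} {qty['unit']}"

print(summarize(observation))  # Hemoglobin A1c: 6.2 %
```

A real advisor would sit several layers above this, but the point stands: once the data is in a standard, machine-readable form, third parties can build on it.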

Friday, March 23, 2018

Who attacks an entire city? Anyone who can write a phishing email!
Sean Gallagher reports:
The city of Atlanta government has apparently become the victim of a ransomware attack. The city’s official Twitter account announced that the city government “is currently experiencing outages on various customer facing applications, including some that customers may use to pay bills or access court-related information.”
According to a report from Atlanta NBC affiliate WXIA, a city employee sent the station a screen shot of a ransomware message demanding a payment of $6,800 to unlock each computer or $51,000 to provide all the keys for affected systems. Employees received emails from the city’s information technology department instructing them to unplug their computers if they noticed anything suspicious. An internal email shared with WXIA said that the internal systems affected include the city’s payroll application.
Read more on Ars Technica.
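For the math-inclined: the two ransom amounts WXIA reported imply a breakeven point at which the bulk demand becomes the cheaper option. (The figures are from the report above; the per-machine/bulk interpretation is mine.)

```python
per_machine = 6_800   # ransom quoted per computer
all_keys = 51_000     # ransom quoted for keys to all affected systems

# Number of machines at which paying the bulk price becomes cheaper
breakeven = all_keys / per_machine
print(breakeven)  # 7.5 -> past 7 machines, the bulk demand "pays off"
```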

Maybe it was Uber’s fault.
Human Driver Could Have Avoided Fatal Uber Crash, Experts Say
… Forensic crash analysts who reviewed the video said a human driver could have responded more quickly to the situation, potentially saving the life of the victim, 49-year-old Elaine Herzberg. Other experts said Uber’s self-driving sensors should have detected the pedestrian as she walked a bicycle across the open road at 10 p.m., despite the dark conditions.
… Zachary Moore, a senior forensic engineer at Wexco International Corp. who has reconstructed vehicle accidents and other incidents for more than a decade, analyzed the video footage and concluded that a typical driver on a dry asphalt road would have perceived, reacted, and activated their brakes in time to stop about eight feet short of Herzberg.
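A back-of-the-envelope version of that kind of reconstruction, using common textbook assumptions (1.5-second perception-reaction time, braking at 0.7 g on dry asphalt) rather than Moore's actual inputs:

```python
# Rough stopping-distance estimate at the reported speed. The reaction
# time and deceleration are standard illustrative assumptions, not
# figures from Moore's analysis.
MPH_TO_MS = 0.44704
v = 38 * MPH_TO_MS          # vehicle speed in m/s (~17.0 m/s)
reaction_time = 1.5         # s, typical alert-driver perception-reaction
decel = 0.7 * 9.81          # m/s^2, achievable braking on dry asphalt

reaction_dist = v * reaction_time        # distance covered before braking
braking_dist = v**2 / (2 * decel)        # distance covered while braking
total_m = reaction_dist + braking_dist
print(round(total_m / 0.3048))           # ~153 ft total stopping distance
```

Under these assumptions an alert driver needs roughly 150 feet to stop from 38 mph, which is why the analysts' conclusions hinge so heavily on how far away Herzberg was when she became visible.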
Other experts questioned the technology. The Uber SUV’s "lidar and radar absolutely should have detected her and classified her as something other than a stationary object," Bryant Walker Smith, a University of South Carolina law professor who studies self-driving cars, wrote in an email.
Smith said the video doesn’t fully explain the incident but "strongly suggests a failure by Uber’s automated driving system and a lack of due care by Uber’s driver (as well as by the victim)."

Certainly not tools my students should be using.
You Can DDoS an Organization for Just $10 per Hour: Cybercrime Report
According to Armor’s The Black Market Report: A Look into the Dark Web (PDF), anyone can DDoS an organization for only $10 an hour or $200 per day. Remote Desktop Protocol (RDP) access to a system for three months costs only $35.

Tools for Privacy?
Vivaldi browser now uses DuckDuckGo as default search engine in private windows
Vivaldi, the desktop browser app that was launched in 2016 by Opera cofounder Jon von Tetzchner, has introduced a small but interesting new feature today.
As most other browsers do, Vivaldi offers a private browsing mode that offers a degree of privacy insofar as it doesn’t record the sites you visit or store cookies and temporary files. However, moving forward, Vivaldi will also make privacy-focused DuckDuckGo the default search engine within private browsing windows, irrespective of what your default search engine is in the normal browsing mode.
Founded in 2008, DuckDuckGo is pitched as the antithesis of Google, insofar as it doesn’t profile or track its users around the web. It also promises to serve the same results to all users.

Maybe there is a use for lawyers after all…
Kaleigh Rogers reports:
Nobody actually reads through the privacy policies of every website, which is why researchers recently used artificial intelligence to create a tool that reads them for you and flags anything you might not be psyched to agree to.
Launched earlier this year as a part of the Usable Privacy Project, the tool uses artificial intelligence to crawl through 7,000 of the web’s most popular sites, including Facebook, Reddit, and Twitter, and parse their privacy policies. That data is available on the project’s website, where you can search for a site and see a breakdown of some of the most pivotal information included in that site’s privacy policy, including whether the company that owns the site is collecting data on its users, and whether it’s sharing that data with any third parties.
Read more on Motherboard.
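The project's real pipeline uses trained natural-language models; a toy keyword pass like the one below shows the basic idea of flagging practices in a policy's text. The FLAGS phrases and labels here are my invention, not the project's actual taxonomy.

```python
# Illustrative-only sketch of flagging privacy-policy practices.
# A real system would use trained classifiers, not keyword matching.

FLAGS = {
    "first-party collection": ["we collect", "information you provide"],
    "third-party sharing": ["third party", "third parties", "share your"],
}

def flag_policy(text):
    """Return the sorted list of flagged practice labels found in text."""
    text = text.lower()
    return sorted(
        label
        for label, phrases in FLAGS.items()
        if any(p in text for p in phrases)
    )

sample = ("We collect your email address and may share your data "
          "with third parties.")
print(flag_policy(sample))
# ['first-party collection', 'third-party sharing']
```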

Interesting. No reason needed? Could they open my lawyer’s phone?
Yes, Cops Are Now Opening iPhones With Dead People's Fingerprints
… it was now relatively common for fingerprints of the deceased to be depressed on the scanner of Apple iPhones, devices which have been wrapped up in increasingly powerful encryption over recent years. For instance, the technique has been used in overdose cases, said one source. In such instances, the victim's phone could contain information leading directly to the dealer.
No privacy for the dead
And it's entirely legal for police to use the technique, even if there might be some ethical quandaries to consider. Marina Medvin, owner of Medvin Law, said that once a person is deceased, they no longer have a privacy interest in their dead body. That means they no longer have standing in court to assert privacy rights.
… "We do not need a search warrant to get into a victim's phone, unless it's shared owned," said Ohio police homicide detective Robert Cutshall.
… Police are now looking at how they might use Apple's Face ID facial recognition technology, introduced on the iPhone X. And it could provide an easier path into iPhones than Touch ID.
… Whilst Face ID is supposed to use your attention in combination with natural eye movement, so fake or non-moving eyes can't unlock devices, Rogers found that the tech can be fooled simply using photos of open eyes. That was something also verified by Vietnamese researchers when they claimed to have bypassed Face ID with specially-created masks in November 2017, said Rogers.

Joe Cadillic writes:
A company called Dataworks Plus has developed a portable facial and fingerprint biometric scanner for law enforcement.
The ‘Evolution’ is a portable facial and fingerprint smartphone that police can use to identify everyone.
“It is multi-modal and can capture fingerprint and facial images and is compatible with our RAPID-ID fingerprint recognition and FACE Plus facial recognition applications.”
Dataworks claims police can identify anyone “regardless of factors such as hair color, glasses, and image background”.
Read more on MassPrivateI.

Why did no one care until it helped elect Trump?
Another day, another revelation about Facebook giving a researcher data on 57B friendships
The Guardian – “Before Facebook suspended Aleksandr Kogan from its platform for the data harvesting “scam” at the centre of the unfolding Cambridge Analytica scandal, the social media company enjoyed a close enough relationship with the researcher that it provided him with an anonymised, aggregate dataset of 57bn Facebook friendships.
Facebook provided the dataset of “every friendship formed in 2011 in every country in the world at the national aggregate level” to Kogan’s University of Cambridge laboratory for a study on international friendships published in Personality and Individual Differences in 2015. Two Facebook employees were named as co-authors of the study, alongside researchers from Cambridge, Harvard and the University of California, Berkeley. Kogan was publishing under the name Aleksandr Spectre at the time.
A University of Cambridge press release on the study’s publication noted that the paper was “the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook”. Facebook did not respond to queries about whether any other collaborations occurred.
“The sheer volume of the 57bn friend pairs implies a pre-existing relationship,” said Jonathan Albright, research director at the Tow Center for Digital Journalism at Columbia University. “It’s not common for Facebook to share that kind of data. It suggests a trusted partnership between Aleksandr Kogan/Spectre and Facebook.”
Facebook downplayed the significance of the dataset, which it said was shared with Kogan in 2013. “The data that was shared was literally numbers – numbers of how many friendships were made between pairs of countries – ie x number of friendships made between the US and UK,” Facebook spokeswoman Christine Chen said by email. “There was no personally identifiable information included in this data.”
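For context on what "national aggregate level" means: raw friendship pairs reduced to per-country-pair counts, with no per-user fields surviving. The edge list below is invented illustrative data, not anything from the dataset itself.

```python
# Sketch of the aggregation Facebook describes: individual friendships
# collapsed into counts per country pair ("x number of friendships
# made between the US and UK").
from collections import Counter

# Hypothetical friendship edges, one tuple per new friendship
friendships = [("US", "UK"), ("UK", "US"), ("US", "FR"), ("US", "UK")]

# Sort each pair so (US, UK) and (UK, US) count as the same pair
pair_counts = Counter(tuple(sorted(pair)) for pair in friendships)
print(pair_counts[("UK", "US")])  # 3 friendships between the US and UK
```

Whether 57bn such counts can ever be truly "literally numbers" with no re-identification risk is, of course, the question the critics are raising.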

(Related) Of course that’s only in Canada. It could never happen here.
The Canadian Press reports:
The fact that political parties are excluded from federal laws on handling personal information — such as social media data — amounts to “an important gap” that could jeopardize the integrity of the electoral process, Canada’s privacy czar says.
There should be a law governing the use of personal data by parties to prevent manipulation of the information to influence an election, privacy commissioner Daniel Therrien said Thursday in an interview.
Read more on Todayville.

Something for my Data Architecture students.
Health care teams depend on electronic health records (EHRs) to compile important medical data from innumerable lab tests and medical devices, observations, treatments, and diagnostic codes. We rely on it so much that we consider the EHR to be a team member.
But in fast-paced critical care units, where even small errors can have big consequences, this digital team member can overload physicians with information. The sheer volume of data in EHRs creates a staggering challenge in complex environments such as intensive care units (ICUs) and emergency medicine departments. Individual clinicians may have to sift through more than 50,000 data points to find key information. This proliferation of data (both meaningful and meaningless) and the workload created by EHR systems have been key drivers of clinician burnout and, paradoxically, introduced new threats to patient safety. What is more, relying only on EHR data greatly limits the insights derived from artificial intelligence algorithms or big data analytics.
Mayo Clinic, the second-largest critical-care provider in the United States, with nearly 350 beds in 15 ICUs across its campuses in Minnesota, Arizona, and Florida, decided to combat the data deluge with ambient intelligence: a set of decision-making tools powered by data on and insights into clinicians’ goals, work environments, strengths, and performance constraints. When layered on top of existing information infrastructure, ambient-intelligence applications can cut through the clutter and deliver the right information in a digestible form that clinicians can use, quickly and effectively, at the patient’s bedside.
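Conceptually, the "cut through the clutter" layer is a relevance filter over the EHR stream. A deliberately tiny sketch, with an invented relevance map and invented readings (a real system would rank thousands of data points against the clinician's current goal, not filter five):

```python
# Toy relevance filter: surface only the data points that matter for
# the clinician's current concern. Both the map and the readings are
# hypothetical illustration, not clinical guidance.

RELEVANT_TO = {
    "sepsis": {"lactate", "temperature", "heart_rate", "wbc"},
}

readings = [
    ("lactate", 4.1), ("sodium", 139), ("heart_rate", 118),
    ("platelets", 210), ("temperature", 39.2),
]

def bedside_view(concern, data):
    """Return only the readings relevant to the stated concern."""
    keep = RELEVANT_TO[concern]
    return [(name, value) for name, value in data if name in keep]

print(bedside_view("sepsis", readings))
# [('lactate', 4.1), ('heart_rate', 118), ('temperature', 39.2)]
```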

Did Congress toss the baby out with the bathwater?
Craigslist axes personal ads after sex trafficking bill passes
The popular online classified ads site Craigslist has stopped publishing personal ads after the Senate approved a controversial sex trafficking bill that makes website operators more accountable for their users' activities.
Craigslist's personal ads have for decades been a popular way for people to make romantic connections, but with the Senate's approval Wednesday of the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), Craigslist said it couldn't afford risking its operations by running personal ads.
… The legislation – approved by both chambers of Congress – amends Section 230 of the Communications Decency Act of 1996, which many online platforms saw as a vital protection from liability for content posted by their users. The legislation makes it a crime to operate an internet platform with the intent of promoting prostitution.
Supporters say the legislation will help curb the growing epidemic of online sex trafficking that often involves children, while opponents argue it could expose tech companies to costly lawsuits and infringe on free speech.

For all my student researchers.
Using your phone to find and scan scholarly articles
Google Scholar Blog Quickly flip through papers on your phone. “Today, we are making it easier to use your phone to find and scan scholarly articles. Clicking a Scholar search result on your phone now opens a quick preview. You can swipe left and right to quickly flip through the list of results. Where available, you can read abstracts. Or explore related and citing articles, which appear at the bottom of the preview along with other familiar Scholar features…”

Thursday, March 22, 2018

An all too common security failure.
Today’s episode of Incident Response Fail involves a cybersecurity professional/bug bounty hunter, Mohamed Suwaiz, and a driver training company in Texas, Smith System, that seemed to stubbornly resist his efforts to alert them to a data leak.
Although Suwaiz (@Msuwaiz on Twitter) describes himself as being motivated by bug bounties, when there’s no bounty to be had, he just gives information that he finds to companies to help them secure their data.
A few days after we first met online while I was investigating the Leon County Schools case, Suwaiz reached out to me to tell me that he needed to talk to me.
“@drive_different is having huge data leak,” he told me. He had already tried unsuccessfully to contact them via emails, Facebook, Twitter, and by contacting an intermediary to help him call the CTO, he explained. Calling from his part of the world is not easy, he said, so he had enlisted the help of someone who might help him get through.
So far, all of his attempts had failed to produce any results.
[Details follow… Bob]

What is interesting is why they didn’t do this years ago. Should make for some interesting discussions with my students.
Read Mark Zuckerberg's Full Statement on Facebook's Data Scandal

(Related) If my students haven’t been doing this, I’ll make it an assignment.
Tools to understand and monitor the collection of your data by Facebook and Twitter
Fast Co. Design: Creative technologists are developing their own tools for investigating, nudging, and altering the world’s largest social network. “..To understand the kind of information the platform may have on you, and how it may use it, turn to Data Selfie, a project developed by the artists Hang Do Thi Duc and Regina Flores Mir last year with funding from the New York City Economic Development Corporation, the Mayor’s Office of Media and Entertainment, and the NYC Media Lab. The Chrome extension generates a “selfie,” or profile, of your Facebook activity and uses machine learning to analyze that behavior in a way similar to Facebook itself. Are your likes more liberal leaning? What does your behavior imply about your psychological profile? Data Selfie–which doesn’t actually record any data from you–offers a glimpse into the kind of behavioral profiling that’s come to light through new revelations about Cambridge Analytica and the leak of data of 50 million Facebook users. Check it out here
  • J. Nathan Matias, who founded the citizen behavioral science platform CivilServant at MIT and is now a postdoc at Princeton University, has blogged about his so-called “audits” over the past year on Medium–for instance, running his own experiments on how Facebook promotes images versus texts with colored backgrounds and an earlier experiment on the Pride reaction button. “How much can a single person learn about Facebook with a little patience and a spreadsheet?” he writes. “More than you might expect!” Matias’s posts include instructions on how to run your own Facebook audit, and he even offers to help you do the statistics or coding if you want to run your own test. “I have often argued that we need independent testing of social tech, especially when a company’s promises are great or the risks are substantial,” he writes. “Sometimes when I suggest this, academics respond that independent evaluations require long, complex work by experts. That’s not always the case.” Learn more here.
  • Ben Grosser, an artist and professor at University of Illinois at Urbana-Champaign’s School of Art & Design, has written about how these ubiquitous user interface elements deeply influence user behavior. He has also built several Chrome extensions that throw Facebook’s carefully honed algorithms into chaos–like lobbing a digital smoke bomb on your News Feed…also he has just launched a version of the Demetricator for Twitter–a reminder that Facebook isn’t the only social network worthy of our critical thought as users. Check it out here…”

For my Software Architecture class.
5 Steps to a Painless Checkout Process

Perspective. Apparently, I have trouble digesting big numbers because I had to read this article several times before I understood exactly how much money we’re talking about. How can a company be worth $50 billion less than its assets?
Tencent’s 60,000% Runup Leads to One of the Biggest VC Payoffs Ever
South African media company Naspers Ltd. is cashing in a tiny sliver of one of the greatest venture-capital investments ever.
… Naspers might have remained an obscure publisher of South African newspapers and operator of pay-TV services if not for its decision in 2001 to invest $32 million in Tencent, a then little-known Chinese startup. The stake is now worth $175 billion and given that Naspers has a market value of about $125.5 billion, it means investors place no value on Naspers’ other operations and investments.
… The sale of 190 million shares, worth $10.6 billion based on Tencent’s closing price in Hong Kong on Thursday, will cut the stake held by Naspers to 31.2 percent from 33.2 percent.
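Checking whether the article's numbers hang together (all figures from the excerpts above; the arithmetic is mine):

```python
# The 2-point stake reduction and the sale proceeds imply both
# Tencent's total share count and the value of Naspers' holding.
shares_sold = 190e6
proceeds = 10.6e9
stake_before, stake_after = 0.332, 0.312

price_per_share = proceeds / shares_sold                      # ~$55.8
tencent_shares = shares_sold / (stake_before - stake_after)   # ~9.5bn
stake_value = stake_before * tencent_shares * price_per_share
print(round(stake_value / 1e9))  # ~176, consistent with the $175B figure
```

So the $32 million of 2001 really did become roughly $175 billion, and with Naspers' whole market cap at ~$125.5 billion, the market is indeed valuing everything else Naspers owns at less than zero.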

For our Python students.

Wednesday, March 21, 2018

An unusually large post from DataBreaches, but that’s good for my students.
Protenus, Inc. has released its February Breach Barometer, with its analysis of 39 health data incidents compiled for them by this site. As I have done in companion posts to their previous reports, I am providing a list, below, of the incidents upon which their report is based. Where additional details are available, I have linked to them. In some cases, as in past months, the only information we have is what HHS has posted on their public breach tool (referred to by some as the “Wall of Shame”). Because HHS’s reporting form results in ambiguous reports, some incidents reported to HHS wind up being coded as “UNKNOWN” for breach vector in Protenus’s analyses. Similarly, HHS’s form does not seem to result in accurate estimates of the role of third parties or Business Associates, and Protenus’s report contains more incidents involving third parties than HHS’s list would suggest.
Unlike previous months’ reports, though, you will see four “nonpublic” incidents in this month’s tally. I will be discussing those four incidents later in this post, but let’s start with a few of the highlights from Protenus’s report for February:
  • 39 incidents, with details for 28 of them;
  • 348,889 records for the 28 incidents for which we had numbers;
  • 16 Insider incidents, accounting for 177,247 records: 15 out of 16 were insider-error, and 1 was insider-wrongdoing;
  • 13 Hacking incidents, accounting for 160,381 records;
  • 11 Business Associate/Third Party incidents; and
  • 23 of the 39 incidents involved providers.
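A quick arithmetic check of the totals above (numbers taken straight from the bullets):

```python
# Insider and hacking records together should not exceed the overall
# record count for the 28 incidents with numbers.
insider, hacking, total = 177_247, 160_381, 348_889

print(insider + hacking)            # 337,628 of the 348,889 records
print(total - (insider + hacking))  # 11,261 records from other vectors
```

The two largest vectors account for nearly all reported records, which is typical of these monthly tallies.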
See their report for additional statistics and analyses, including their analyses of gap to discovery of breaches and gap to reporting/disclosing of breaches. Here is the list of the 39 incidents compiled for February:

Something my students will be discussing this Quarter. At last, a recommendation for a paper trail! But no way to match it to vote totals?
Senate Intel Committee gives Homeland Security its election security wish list
In a press conference today, the Senate Select Committee on Intelligence presented its urgent recommendations for protecting election systems as the U.S. moves toward midterm elections later this year.

Lots to chew on here. How much it will change Facebook or social media in general remains to be seen. Probably not much.
Facebook, Cambridge Analytica, the 2016 Election, and a colossal misappropriation of social media data
The media frenzy linking a whole lot of high-profile news stories together – Facebook CEO Zuckerberg’s disappearing act; Cambridge Analytica’s ‘harvesting’ of 50 million FB users’ data [without permission, and directed by Steve Bannon], which helped explain the role the company played when it was embedded with the Trump campaign in 2016; the Mueller investigation; the Comey book; the McCabe firing; and the weather (happy Spring – enjoy Washington’s biggest snowstorm of the season) – has yet to reach a crescendo, so hang in there. Along with the impact of the DC-area snowstorm on the budget funding deadline, we are also waiting for Facebook’s official response to yet another ‘breach’ of trust and data, and for more evidence about how the social media data of tens of millions of users was appropriated and used by a UK conglomerate that has a very troubling history of involvement in elections in the US, the UK, and beyond, and that used self-destructing email to cover its trail.
I posted over a dozen references and sources on this issue when it began to break, and I use the word ‘began’ cautiously. The massive, unmonitored collection [dubbed ‘harvesting’] of social media user data is far greater than users of various applications have been willing to acknowledge, let alone attempt to mitigate against future harvesting efforts [if they have any capability of doing so in the first place – which remains unclear]. This premise stands completely separate from the concept of any regulatory function or layer that may exist between users and the companies, here and abroad, that acquire our data (often at no cost at all) and use it until such time that a whistleblower or two enters from stage left and lifts the curtain on all the backend techie sausage making.

(Related) I shouldn’t have to tell my students, but it can’t hurt.
How To Change Your Facebook Settings To Opt Out of Platform API Sharing

Facebook has lost nearly $50 billion in market cap since the data scandal

Clearly we (NSA) have weapons. When can they be used and against what targets?
U.S. Military Should Step Up Cyber Ops: General
General John Hyten, who leads US Strategic Command (STRATCOM), told lawmakers the US has "not gone nearly far enough" in the cyber domain, also noting that the military still lacks clear rules of cyber engagement.
"We have to go much further in treating cyberspace as an operational domain," Hyten told the Senate Armed Services Committee.
"Cyberspace needs to be looked at as a warfighting domain, and if somebody threatens us in cyberspace we need to have the authorities to respond."
Hyten noted, however, that the US had made some progress in conducting cyber attacks on enemies in the Middle East, such as the Islamic State group.
His testimony comes weeks after General Curtis Scaparrotti, commander of NATO forces in Europe, warned that US government agencies are not coordinating efforts to counter the cyber threat from Russia, even as Moscow conducts a "campaign of destabilization."
And last month, Admiral Michael Rogers, who heads both the NSA – the leading US electronic eavesdropping agency – and the new US Cyber Command, said President Donald Trump had not yet ordered his spy chiefs to retaliate against Russian interference in US elections.

(Related) The terrorist organization and individual actors; how about funding sources and nations that provide shelter and training?
'Slingshot' Campaign Outed by Kaspersky is U.S. Operation Targeting Terrorists: Report
Earlier this month, Kaspersky published a report detailing the activities of a threat actor targeting entities in the Middle East and Africa — sometimes by hacking into their Mikrotik routers. The group is believed to have been active since at least 2012 and its members appear to speak English, the security firm said.
The main piece of malware used by the group has been dubbed Slingshot based on internal strings found by researchers. Kaspersky identified roughly 100 individuals and organizations targeted with the Slingshot malware, mainly in Kenya and Yemen, but also in Afghanistan, Libya, Congo, Jordan, Turkey, Iraq, Sudan, Somalia and Tanzania.
CyberScoop claims to have learned from unnamed current and former U.S. intelligence officials that Slingshot is actually an operation of the U.S. military’s Joint Special Operations Command (JSOC), a component of Special Operations Command (SOCOM), aimed at members of terrorist organizations such as ISIS and al-Qaeda. SOCOM is well known for its counterterrorism operations, which can sometimes include a cyber component.

Something to liven up those dull PowerPoint slides? Screaming, groaning, weeping students perhaps?
ZapSplat - Thousands of Free Sound Effects
ZapSplat is a website that offers more than 20,000 sound effects and songs that you can download and re-use for free. The licensing that ZapSplat uses is quite clear. As long as you cite ZapSplat, you can use the sound effects and music in your videos, podcasts, and other multimedia projects.
ZapSplat does require you to create an account in order to download the MP3 and WAV files that it hosts. Once you have created an account you can download as many files as you like. ZapSplat does offer a "Gold" account. The benefits of a Gold account are that you don't have to cite ZapSplat and you get access to an expanded library of sounds.

Tuesday, March 20, 2018

Not guilty? Should self-driving cars be required to “see in the dark?”
Exclusive: Tempe police chief says early probe shows no fault by Uber
Pushing a bicycle laden with plastic shopping bags, a woman abruptly walked from a center median into a lane of traffic and was struck by a self-driving Uber operating in autonomous mode.
… Traveling at 38 mph in a 35 mph zone on Sunday night, the Uber self-driving car made no attempt to brake, according to the Police Department’s preliminary investigation.
… The self-driving Volvo SUV was outfitted with at least two video cameras, one facing forward toward the street
… From viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway,” Moir said. The police have not released the videos.
… “I suspect preliminarily it appears that the Uber would likely not be at fault in this accident, either,” Moir said.
However, if Uber is found responsible, that could open a legal quagmire.
“I won’t rule out the potential to file charges against the (backup driver) in the Uber vehicle,” Moir said.
But if the robot car itself were found at fault? “This is really new ground we’re venturing into,” she said.

(Related) Instant commentary.
Self-Driving Cars Still Don't Know How to See
Cars don’t see well
Autonomous cars don’t track the center line of the street well on ill-maintained roads. They can’t operate on streets where the line markings are worn away—as on many of the streets in New York City. These cars also don’t operate in snow and other bad weather because they can’t “see” in these conditions. A LIDAR guidance system doesn’t work well in the rain or snow or dust because its beams bounce off the particles in the air instead of bouncing off obstacles like bicyclists.

For my Computer Security students.
John Amabile and Michael Binns of Parker Poe Adams & Bernstein write:
A change in emphasis in disputes over data security breaches is coming. To date, the focus has been on issues and potential damages arising from the breach itself and the subsequent loss of private, personal information. In light of recognized delays from both Equifax and Uber, combined with the confusing array of breach notification responsibilities, we believe 2018 will see a growing emphasis on disputes arising from a corporation’s delay in notifying the public, the affected individuals and regulatory bodies about the breach.
Read more on Daily Report.

Implications for my Software Architecture students? Sounds like they are rather low on the Maturity Model.
Why Process Is U.S. Health Care’s Biggest Problem
… It only takes 10 minutes of direct observation of a nurse in a hospital to understand that care-delivery processes are not standardized and are dependent on individuals, not systems. This lack of reproducibility leads to errors. Since every caregiver does it his or her own way, it’s difficult to improve anything. Stable, reproducible systems are required to deliver consistently high quality. Industrial companies figured this out 50 years ago. The writings of manufacturing gurus Imai and Shingo provide insight into how quality is built into processes. A process must first be stabilized, then standardized, before being improved. Because few standardized processes exist in care delivery, there are many possibilities for error. That’s why simply making a poor process electronic by implementing an electronic health record (EHR) doesn’t lead to better quality or cost.
When it comes to change, the technology is the easiest part. Most health systems in America have or are implementing the EHR. And the vendor processes for implementation have become very good. The hard part is to get the doctors, nurses, and administrators to agree on what is the best way to deliver the care. Since the doctors control most care decisions, the rest of the provider team follows the doctors’ lead. If the doctor wants to do things a certain way, that’s what is done. The problem is the next doctor wants it his way and so on.

My students should think about what Gartner is saying.
Gartner issues four-part prescription for data and analytics leaders
“It's such a consequential time to be a data and analytics leader," said Rita Sallam, research vice president at Gartner and master of ceremonies of the recent Gartner Data & Analytics Summit 2018 event. Consequential because companies deemed info-savvy are valued at nearly twice the market average, she said, citing Gartner research. And consequential because data, as regarded by the information experts attending the event, is under attack.
"Just as fake news became a viable political weapon – and make no mistake, fake news is fake data, which makes it our problem – ensuring data quality, providing a foundation of trust, just became job No. 1 for everyone in this room," Sallam said.
… Sallam and her Gartner colleagues warned attendees they must overcome four "tough challenges" as they strive to help their companies capitalize on data. To succeed, they will have to:
  1. Establish trust in the data;
  2. Promote diversity – of people and skills, as well as types of data;
  3. Manage complexity through automation; and
  4. Develop data literacy programs.

How to learn what your citizens are talking about? Telegram Secret Chats are one-on-one chats in which messages are encrypted with keys held only by the chat’s participants; this is different from the server-side encryption used for cloud chats.
Telegram Must Give FSB Encryption Keys: Russian Court
Russia's Supreme Court on Tuesday ruled the popular Telegram messenger app must provide the country's security services with encryption keys to read users' messaging data, agencies reported.
Media watchdog Roskomnadzor instructed Telegram to "provide the FSB with the necessary information to decode electronic messages received, transmitted, or being sent" within 15 days, it said on its website.
Telegram had appealed against an earlier ruling that it must share this information, but this appeal was rejected on Tuesday.
If it does not provide the keys it could be blocked in Russia.
Telegram founder Pavel Durov wrote last year that the FSB's demands are "technically impossible to carry out" and violate the Russian Constitution, which entitles citizens to privacy of correspondence.
Tuesday's ruling is the latest move in a dispute between Telegram and the Russian authorities as Moscow pushes to increase surveillance of internet activities.
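Why Durov calls the demand "technically impossible": in an end-to-end key exchange such as Diffie-Hellman, the key is derived independently on the two participants' devices, and the server never holds it. A toy sketch of the idea (this is not Telegram's actual MTProto protocol, and the modulus below is far too small for real use):

```python
import secrets

# Toy parameters for illustration only: 2**127 - 1 is prime, but real
# protocols use much larger, carefully chosen groups.
P = 2**127 - 1
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1   # secret exponent, never transmitted
    pub = pow(G, priv, P)                 # only this value crosses the wire
    return priv, pub

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_pub, alice_priv, P)
bob_key = pow(alice_pub, bob_priv, P)
assert alice_key == bob_key   # same shared key, yet neither secret left its device
```

The server relays only the public values; a court order to the server cannot produce a key the server never saw.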

Because we need more data?
IBM working on ‘world’s smallest computer’ to attach to just about everything
IBM is hard at work on the problem of ubiquitous computing, and its approach, understandably enough, is to make a computer small enough that you might mistake it for a grain of sand.
It’s an evolution of IBM’s “crypto anchor” program, which uses a variety of methods to create what amounts to high-tech watermarks for products, verifying, for example, that they come from the factory the distributor claims and are not counterfeits mixed in with genuine items.
The “world’s smallest computer,” as IBM continually refers to it, is meant to bring blockchain capability into this; the security advantages of blockchain-based logistics and tracking could be brought to something as benign as a bottle of wine or box of cereal.
In addition to getting the computers extra-tiny, IBM intends to make them extra-cheap, perhaps 10 cents apiece.

More than half of US homes now subscribe to a streaming service, spending $2.1 billion a month
Deloitte found in its 12th annual digital media trends survey that the percentage of American households that subscribe to a streaming service has grown to 55 percent. Last year, the firm reported that 49 percent of households reported at least one video subscription service.
Kevin Westcott, vice chairman and U.S. media and entertainment leader at Deloitte, told CNBC that exclusive original content is a major driver for customers when they're choosing subscriptions. In its survey of 2,088 consumers, Deloitte said more than half of current streaming customers chose to subscribe to a service based on access to exclusive content.

Worth browsing?
MIT SMR Unlocked for All Visitors
In celebration of growth, please enjoy full access to the MIT SMR site on March 20 and 21.

Monday, March 19, 2018

The opposite of Artificial Intelligence is Normal Stupidity, but why design it into a device? Something for my Software Architecture class.
People are accidentally setting off Apple’s Emergency SOS alert
If you sleep on your Apple Watch the wrong way, you might get a wake-up call from the police. That’s what happened to Jason Rowley, who tweeted about the incident earlier this week. Using his watch as a sleep tracker, he ended up holding down the crown button to trigger an emergency call to the police, who showed up in his bedroom at 1AM. Rowley told us the police were friendly and helpful, and accustomed to WatchOS misdials like this one.
If you scan through Twitter, you’ll find a surprising number of stories like Rowley’s. It’s a problem for iPhones too, since the same alert can be triggered through the side button. (One Verge staffer triggered an alert after mistaking the power button for the volume controls.) In each case, you’ll get a blaring countdown and have three to five seconds to turn it off before your device calls 911 and texts any emergency contacts you’ve set up.
… The exact sequence of buttons varies from device to device. A Watch will slip into an alert just from holding down the crown button long enough, which seems to be a particular danger if you wear it to sleep. If you’re running the latest iOS on an iPhone 7 or older, you trigger an SOS by tapping the side button five times (apparently a common practice for fidgeters), and more recent iPhones will start the countdown just from holding the button.
Of course, you can fix some of this by turning off Autocall in Settings > Emergency SOS, which will add an extra slider step. But it’s easy to see why you might not want to. Maybe a few accidental 911 calls isn’t so bad compared to the risk of an actual emergency?

It may be out there, so we have to search?
Sidney Fussell reports:
Google was served at least four sweeping search warrants by Raleigh, North Carolina police last year, requesting anonymized location data on all users within areas surrounding crime scenes. In one case, Raleigh police requested information on all Google accounts within 17 acres [??? Bob] of a murder, overlapping residences, and businesses. Google did not confirm or deny whether it handed over the requested data to police.
WRAL reporter Tyler Dukes found four investigations in 2017 where police issued these uniquely extensive warrants: two murder cases, one sexual battery case, and an arson case that destroyed two apartment complexes and displaced 41 people.
Read more on Gizmodo.
[From Gizmodo:
Instead of finding a suspect, and then searching that person’s data, police are searching enormous amounts of data to pinpoint a potential suspect.
… Police in each case were requesting account identifiers, an anonymized string of numbers unique to each device, and time-stamped location coordinates for every device. Police wanted to review this information, narrow down their list, [How? Bob] and then request user names, birth dates, and other identifying information regarding the phones’ owners. This information doesn’t reveal actual text messages or phone call logs. For that information, police would have to go through a separate warrant process.
Disturbingly, if Google has handed over data, it could be under court order not to notify individual users.
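The mechanics of such a "reverse location" request can be sketched as a filter over anonymized records. The fields below (device_id, ts, lat, lon) are illustrative, not Google's actual schema:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def devices_near(records, scene, radius_m, t_start, t_end):
    """Return anonymized device IDs seen within radius_m of the scene
    during the time window: the first step police describe, before any
    separate request to de-anonymize the narrowed list."""
    return {
        r["device_id"]
        for r in records
        if t_start <= r["ts"] <= t_end
        and haversine_m(r["lat"], r["lon"], *scene) <= radius_m
    }

# Illustrative records (device_id is an opaque string, ts a Unix time).
records = [
    {"device_id": "a1", "ts": 100, "lat": 35.7796, "lon": -78.6382},  # at the scene
    {"device_id": "b2", "ts": 100, "lat": 35.7900, "lon": -78.6382},  # over 1 km away
    {"device_id": "c3", "ts": 999, "lat": 35.7796, "lon": -78.6382},  # wrong time
]
hits = devices_near(records, (35.7796, -78.6382), 500, 0, 200)
assert hits == {"a1"}
```

The privacy concern in the article is the order of operations: everyone inside the fence is swept into `hits` before any individualized suspicion exists.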

I don’t own a phone. Probably makes me a suspect.
Eva Fedderly reports:
A divided 11th Circuit on Thursday upheld the conviction of a Florida man stemming from a warrantless search of his cellphone, holding that such searches do not violate the Fourth Amendment.
The appellant in the case, Hernando Javier Vergara, was returning home to Tampa, Florida following a cruise to Cozumel, Mexico, when he was subjected to a search of luggage by a Customs and Border Protection officer.
Read more on Courthouse News.

Could this happen here?
Reuters reports:
China said it will begin applying its so-called social credit system to flights and trains and stop people who have committed misdeeds from taking such transport for up to a year.
Read more on Reuters.
And now do you wonder whether too many people are too quick to say they have nothing to hide?

For my Computer Security class.
Preventing Business Email Compromise Requires a Human Touch
Human-powered Intelligence Plays a Critical Role in Defending Against Socially Engineered Attacks
The FBI’s Internet Crime Complaint Center (IC3) declared Business Email Compromise (BEC) the “3.1 billion dollar scam” in 2016, an amount which grew within a year into a “5 billion dollar scam.” Trend Micro now projects those losses at more than 9 billion dollars.
It’s an understatement to say BEC scams and the resulting damages are on the rise. But with cybersecurity spending across all sectors at an all-time high, how is such an unsophisticated threat still costing otherwise well-secured organizations billions of dollars?
Unlike the numerous types of attacks that incorporate malware, most BEC scams rely solely on social engineering. In fact, its use of trickery, deception, and psychological manipulation rather than malware is largely why BEC continually inflicts such substantial damages. Since most network defense solutions are designed to detect emails containing malware and malicious links, BEC emails often land directly in users’ inboxes. And when this happens, the fate of an attempted BEC scam is in the hands of its recipient.
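Because BEC mail typically carries no malware or malicious links, defenses lean on simple heuristics plus the human review the headline calls for. One common check, sketched with Python's standard email tooling (the executive list and addresses are made up):

```python
from email.utils import parseaddr

# Hypothetical list of executives attackers commonly impersonate,
# mapping display name to the domain their real mail comes from.
TRUSTED = {"Jane Smith": "acme.com"}

def looks_like_bec(from_header: str) -> bool:
    """Flag mail whose display name matches a known executive but whose
    address domain does not: a classic BEC impersonation pattern."""
    name, addr = parseaddr(from_header)
    domain = addr.rpartition("@")[2].lower()
    expected = TRUSTED.get(name)
    return expected is not None and domain != expected

assert looks_like_bec('Jane Smith <jane.smith@acrne-payments.net>')   # spoofed domain
assert not looks_like_bec('Jane Smith <j.smith@acme.com>')            # legitimate
assert not looks_like_bec('Random Vendor <billing@vendor.io>')        # unknown name
```

A heuristic like this only narrows the funnel; the wire transfer still ultimately depends on a human recognizing the manipulation, which is the article's point.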

If it can be done, should my Ethical hackers give it a try? The article gives some tips on how it works…
GrayKey iPhone unlocker poses serious security concerns
… In late 2017, word of a new iPhone unlocker device started to circulate: a device called GrayKey, made by a company named Grayshift. Based in Atlanta, Georgia, Grayshift was founded in 2016, and is a privately-held company with fewer than 50 employees. Little was known publicly about this device—or even whether it was a device or a service—until recently, as the GrayKey website is protected by a portal that screens for law enforcement affiliation.
According to Forbes, the GrayKey iPhone unlocker device is marketed for in-house use at law enforcement offices or labs. This is drastically different from Cellebrite’s overall business model, in that it puts complete control of the process in the hands of law enforcement.
Thanks to an anonymous source, we now know what this mysterious device looks like, and how it works. And while the technology is a good thing for law enforcement, it presents some significant security risks.

Social Media as a targeting tool.
US spy lab hopes to geotag every outdoor photo on social media
Imagine if someone could scan every image on Facebook, Twitter, and Instagram, then instantly determine where each was taken. The ability to combine this location data with information about who appears in those photos—and any social media contacts tied to them—would make it possible for government agencies to quickly track terrorist groups posting propaganda photos. (And, really, just about anyone else.)
That's precisely the goal of Finder, a research program of the Intelligence Advanced Research Projects Agency (IARPA), the Office of the Director of National Intelligence's dedicated research organization.
For many photos taken with smartphones (and with some consumer cameras), geolocation information is saved with the image by default. The location is stored in the Exif (Exchangeable Image File Format) data of the photo itself unless geolocation services are turned off. If you have used Apple's iCloud photo store or Google Photos, you've probably created a rich map of your pattern of life through geotagged metadata. However, this location data is pruned off for privacy reasons when images are uploaded to some social media services, and privacy-conscious photographers (particularly those concerned about potential drone strikes) will purposely disable geotagging on their devices and social media accounts.
… The Finder program seeks to fill in the gaps in photo and video geolocation by developing technologies that build on analysts' own geolocation skills, taking in images from diverse, publicly available sources to identify elements of terrain or the visible skyline. In addition to photos, the system will pull its imagery from sources such as commercial satellite and orthogonal imagery. The goal of the program's contractors—Applied Research Associates, BAE Systems, Leidos (the company formerly known as Science Applications Incorporated), and Object Video—is a system that can identify the location of photos or video "in any outdoor terrestrial location."
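The Exif GPS tags mentioned above store latitude and longitude as degree/minute/second rationals plus a hemisphere reference letter; turning them into the signed decimal degrees a mapping tool needs is simple arithmetic. A sketch of just that conversion (not a full Exif parser; the sample coordinates are illustrative):

```python
from fractions import Fraction

def dms_to_decimal(dms, ref):
    """Convert Exif-style (degrees, minutes, seconds) rationals and an
    'N'/'S'/'E'/'W' reference letter into signed decimal degrees."""
    deg, minutes, seconds = (Fraction(*r) for r in dms)
    decimal = float(deg + minutes / 60 + seconds / 3600)
    return -decimal if ref in ("S", "W") else decimal

# Exif stores each component as a (numerator, denominator) pair.
lat = dms_to_decimal([(40, 1), (26, 1), (4614, 100)], "N")   # 40° 26' 46.14" N
lon = dms_to_decimal([(79, 1), (58, 1), (5640, 100)], "W")   # 79° 58' 56.40" W
assert abs(lat - 40.44615) < 1e-4
assert abs(lon - (-79.98233)) < 1e-4
```

When these tags are present, no image analysis is needed at all; Finder exists precisely for the photos where they have been stripped or disabled.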

What Do Saudi Arabia, Iraq, UAE, Egypt, Kazakhstan, Turkmenistan, Nigeria, Burma And Bangladesh Have In Common?
They’ve all bought military UAVs from China. I didn’t realize China had advanced so far in military exports.

Looks like a dogpile on Facebook.
Facebook may have violated FTC privacy deal, say former federal officials, triggering risk of massive fines

Probably all social media will have to have a generalized version of this soon. Easy to see how that capability could be misused.
France’s new cyberhate law will require Facebook and Twitter to remove racist content within 24 hours
As part of an ongoing effort to fight rising racism and anti-Semitism, the French government announced today that it will introduce new legislation requiring digital platforms to more swiftly remove offensive content.
In announcing details of the proposed law after months of review, French prime minister Edouard Philippe said France will move to adopt the cyberhate law immediately while also pressing the European Union to adopt a version of the same measures for all members. While only some of the details were revealed, the French proposal mirrors a German law that went into effect this year and threatens fines of up to €50 million ($62 million) if a social network does not take down content identified as hate speech within 24 hours.

Voice Chat App Zello Turned a Blind Eye to Jihadis for Years
Despite warnings and flagged accounts, Zello left accounts with ISIS flag avatars and jihadist descriptions live on its service.

One Way Facebook Can Stop the Next Cambridge Analytica
In a 2013 paper, psychologist Michal Kosinski and collaborators from University of Cambridge in the United Kingdom warned that “the predictability of individual attributes from digital records of behavior may have considerable negative implications,” posing a threat to “well-being, freedom, or even life.” This warning followed their striking findings about how accurately the personal attributes of a person (from political leanings to intelligence to sexual orientation) could be inferred from nothing but their Facebook likes. Kosinski and his colleagues had access to this information through the voluntary participation of the Facebook users by offering them the results of a personality quiz, a method that can drive viral engagement. Of course, one person’s warning may be another’s inspiration.
Kosinski’s original research really was an important scientific finding. The paper has been cited more than 1,000 times and the dataset has spawned many other studies. But the potential uses for it go far beyond academic research. In the past few days, the Guardian and the New York Times have published a number of new stories about Cambridge Analytica, the data mining and analytics firm best known for aiding President Trump’s campaign and the pro-Brexit campaign. This trove of reporting shows how Cambridge Analytica allegedly relied on the psychologist Aleksandr Kogan (who also goes by Aleksandr Spectre), a colleague of the original researchers at Cambridge, to gain access to profiles of around 50 million Facebook users.

Suppose Amazon wants to buy in…
Google plans to boost Amazon competitors in search
Google may be assembling a supergroup of big retail brands to go to war with Amazon over the future of online shopping. Reuters is reporting that the search engine is teaming up with Target, Walmart, Home Depot, Costco and Ulta for the new project. These companies, and any other willing participants, can index their catalogs on Google, which will show up when someone starts searching for stuff to buy. Naturally, rather than receiving an ad fee, Google simply gets a cut of the sales that are subsequently generated.
The report claims that Google is selling its new anti-Amazon tools on the basis that it is utterly dominant in the search world.

Paper – Law, Metaphor, and the Encrypted Machine
Gill, Lex, Law, Metaphor, and the Encrypted Machine (March 12, 2018). Osgoode Legal Studies Research Paper No. 72, Volume 13, Issue 16, 2018. Available at SSRN:
“The metaphors we use to imagine, describe and regulate new technologies have profound legal implications. This paper offers a critical examination of the metaphors we choose to describe encryption technology in particular, and aims to uncover some of the normative and legal implications of those choices. Part I provides a basic description of encryption as a mathematical and technical process. At the heart of this paper is a question about what encryption is to the law. It is therefore fundamental that readers have a shared understanding of the basic scientific concepts at stake. This technical description will then serve to illustrate the host of legal and political problems arising from encryption technology, the most important of which are addressed in Part II. That section also provides a brief history of various legislative and judicial responses to the encryption “problem,” mapping out some of the major challenges still faced by jurists, policymakers and activists. While this paper draws largely upon common law sources from the United States and Canada, metaphor provides a core form of cognitive scaffolding across legal traditions. Part III explores the relationship between metaphor and the law, demonstrating the ways in which it may shape, distort or transform the structure of legal reasoning. Part IV demonstrates that the function served by legal metaphor is particularly determinative wherever the law seeks to integrate novel technologies into old legal frameworks. Strong, ubiquitous commercial encryption has created a range of legal problems for which the appropriate metaphors remain unfixed. Part V establishes a loose framework for thinking about how encryption has been described by courts and lawmakers — and how it could be. What does it mean to describe the encrypted machine as a locked container or building? As a combination safe? As a form of speech? As an untranslatable library or an unsolvable puzzle? What is captured by each of these cognitive models, and what is lost? This section explores both the technological accuracy and the legal implications of each choice. Finally, the paper offers a few concluding thoughts about the utility and risk of metaphor in the law, reaffirming the need for a critical, transparent and lucid appreciation of language and the power it wields.”

For the toolkit.
Twitter for Business: Everything You Need to Know

Another tool for the toolkit. Knowing it can be done is half the battle.
Easy Screen OCR is a solid program for grabbing the text from any image on your PC. Head to its homepage and download it, opting for the portable version if you like.

Just in time for my Software Architecture class!
Ongoing series of nonverbal algorithm assembly instructions based on IKEA methodology
“IDEA is a series of nonverbal algorithm assembly instructions by Sándor P. Fekete, Sebastian Morr, and Sebastian Stiller. They were originally created for Sándor’s algorithms and datastructures lecture at TU Braunschweig, but we hope they will be useful in all sorts of context. We publish them here so that they can be used by teachers, students, and curious people alike. Visit the about page to learn more.”

Something to mention to my students. (Yes, that includes textbooks!)
Preaching to the choir – Why Reading Books Should be Your Priority, According to Science
Inc., Christina DesMarais: “More than a quarter (26 percent) of American adults admit to not having read even part of a book within the last year. That’s according to statistics coming out of the Pew Research Center. If you’re part of this group, know that science supports the idea that reading is good for you on several levels.
  • Reading fiction can help you be more open-minded and creative.
  • People who read books live longer. [Good to know!!]
  • Reading 50 books a year is something you can actually accomplish.
  • Successful people are readers….”

Dilbert on the future technology of crime fighting?