Saturday, February 24, 2018

I normally skip stories about laptop theft, there are just too many of them.
KHOU reports:
Information about City of Houston employees’ health insurance may have been compromised after an employee’s laptop computer was stolen.
City officials say the laptop was stolen from the employee’s car on Feb. 2. They say the password-protected computer may have contained city employees’ records, including names, addresses, dates of birth, Social Security numbers and other medical information.
Read more on KHOU.
[From the article:
City officials say human resource professionals are trained not to remove laptops from City offices unless sensitive data is encrypted. They say one employee “failed to follow his training.”




Not the most reassuring headline in the age of Russian election hacking.
The Myth of the Hacker-Proof Voting Machine




Defining the field of play?
Patience Wait reports:
In December, the U.S. Federal Trade Commission hosted a workshop on student privacy and edtech in Washington, D.C. During one panel, Priscilla Regan, a professor at George Mason University — who has been writing about privacy policy since the late 1970s — set the framework for discussion by identifying six broad concerns that together comprise the facets of the student privacy discussion… The big six, according to Regan:
  • Organizational information privacy concerns
  • Anonymity
  • Surveillance and tracking
  • Autonomy
  • Bias, discrimination and due process
  • Data ownership
Read more about these concerns on EdScoop.




I think of this as automating the paper list police officers used to carry in their cars. Back then, one officer drove and the other scanned for suspicious activity and checked license plates, right?
The Kentucky Supreme Court declared last week that police need not bother applying for a warrant before tracking motorists with automated license plate readers (ALPR, also known as ANPR in Europe). The justices took up the issue in the case of Gregory Traft, who was stopped in Boone County on September 11, 2012, because his license plate triggered an alert from the patrol car’s automated camera system.
Traft had a warrant out for his arrest for failing to appear in court on the charge that he wrote a bad check. Deputy Sheriff Adam Schepis ordered Traft to pull over. In the course of the stop, Traft appeared to be quite drunk and was placed under arrest.
The high court’s only interest in the case was whether the deputy’s use of the license plate camera was lawful.
Read more on TheNewspaper.com
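The automated hotlist lookup described above — the digital version of the officer's paper list — reduces to normalizing a camera's plate reading and checking it against a set of wanted plates. A purely illustrative sketch (the function names are my own):

```python
def normalize(plate):
    """Reduce an OCR-style plate reading to a canonical form:
    uppercase, alphanumeric characters only."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

def check_plate(reading, hotlist):
    """Return True if the reading matches any plate on the hotlist,
    ignoring spacing, hyphens, and case."""
    return normalize(reading) in {normalize(p) for p in hotlist}
```

A camera that reads "abc-1234" would then raise an alert if "ABC 1234" is on the wanted list — which is all that happened, automatically, in the Traft stop.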




I am trying to convince my students that this will happen much faster than they think.
California could see self-driving cars with ‘remote drivers’ in April
Self-driving cars that back up their computerized system with a remote human operator instead of a fallback driver at the wheel could be tested on California roads as early as April, the state department of motor vehicles said.




Will this happen to the US as well?
How New Technologies Will Radically Reshape India’s Workforce
… skill development and employability remain a key challenge. At present, only 18% of the country’s workforce is formally skilled.
… even in the IT services sector, 55% to 65% of existing jobs are likely to go away because of AI.




An article I will share with my students next time I teach Excel.


Friday, February 23, 2018

A local incident.
SamSam ransomware infects Colorado Department of Transportation
SamSam ransomware is back and the Colorado Department of Transportation is its most recent victim. More than 2,000 agency computers had to be shut down on Feb 21 to prevent the ransomware from spreading across the entire infrastructure.
According to CBS local news, the critical systems used to manage road traffic and alerts were not affected. The attackers encrypted some files and requested bitcoin in exchange for the decryption key.




A video comment worth watching.
Weekly Update 75
03:52 - Australia's Notifiable Data Breach Scheme




Good question?
Cyrus Farivar reports:
Last November, a 74-year-old rancher and attorney was walking around his ranch just south of Encinal, Texas, when he happened upon a small portable camera strapped approximately eight feet high onto a mesquite tree near his son’s home. The camera was encased in green plastic and had a transmitting antenna.
Not knowing what it was or how it got there, Ricardo Palacios removed it.
Soon after, Palacios received phone calls from Customs and Border Protection officials and the Texas Rangers. Each agency claimed the camera as its own and demanded that it be returned. Palacios refused, and they threatened him with arrest.
Read more on Ars Technica.
Can the government just come onto your private property without your knowledge or consent and install surveillance equipment to surveil others? And if they can, is the notion of “private property” all but dead?




...and another good question.
Why Can Everyone Spot Fake News But The Tech Companies?
… Among those who pay close attention to big technology platforms and misinformation, the frustration over the platforms’ repeated failures to do something that any remotely savvy news consumer can do with minimal effort is palpable: Despite countless articles, emails with links to violating content, and viral tweets, nothing changes. The tactics of YouTube shock jocks and Facebook conspiracy theorists hardly differ from those of their analog predecessors; crisis actor posts and videos have, for example, been a staple of peddled misinformation for years.
This isn't some new phenomenon. Still, the platforms are proving themselves incompetent at addressing it — over and over and over again. In many cases, they appear to be surprised that such content sits on their websites. And even their public relations responses seem to suggest they've been caught off guard, with no plan in place for messaging when they slip up.




A little encouragement for my student entrepreneurs.
Snap chief earns $638 million in 2017, third-highest CEO payout ever
Snap Inc (SNAP.N) Chief Executive Evan Spiegel received $637.8 million as total compensation last year after the company went public, the third-highest annual payout ever received by a company’s CEO.




I wasn’t sure how “Inclusion” and AI were related. Looks like I learned something new.
New Website Draws on International Perspectives to Highlight Issues related to Inclusion and Artificial Intelligence
“The Berkman Klein Center for Internet & Society is pleased to share a newly-published interactive webpage, www.aiandinclusion.org, which highlights salient topics and offers a broad range of resources related to issues of AI and inclusion. The materials contribute to the Diversity and Inclusion track of the broader Ethics and Governance of Artificial Intelligence Initiative. Launched in Spring 2017, the initiative is anchored by the Berkman Klein Center and the MIT Media Lab, who have been working in conjunction over the past year to conduct evidence-based research, bolster AI for the social good, and construct a collective knowledge base on the ethics and governance of AI. The site reflects lessons learned from a wide-ranging international effort, and includes a number of resources produced from the Global Symposium on AI and Inclusion, which convened 170 participants from over 40 countries in Rio de Janeiro last November on behalf of the Global Network of Centers to discuss the impact of AI and related technologies on marginalized populations and the risks of amplifying digital inequalities across the world. Some of the primary resources available on the webpage include foundational materials that address overarching themes, key research questions, the initial framing of a research roadmap, and an overview of some of the most relevant opportunities and challenges identified pertaining to AI, inclusion, and governance. The research, findings, and ideas presented throughout the page both illuminate lessons learned from the past year, and lay the groundwork for the initiative’s continued work on issues of inclusion, acknowledging that the resources found here are only a starting point for this important conversation…”




A free and simple tool.




It can’t hurt to have some tools for this.
Common Craft Explains Flipped Classrooms
The flipped classroom concept, in the right setting, can be an effective way to maximize classroom time. Perhaps you've tried it yourself and have been looking for a way to explain it to parents or colleagues. Common Craft recently released a good video that could help you do just that.
Flipped Classroom Explained by Common Craft teaches the fundamental ideas behind the flipped classroom model. Thankfully, the video also addresses why the flipped classroom model is not appropriate for all students.
TESTeach (formerly known as Blendspace) makes it easy for teachers to organize and share educational materials in a visually pleasing format.
EDPuzzle is a popular tool for adding your voice and text questions to educational videos.
MoocNote is a free tool for adding timestamped comments, questions, and links to videos.


Thursday, February 22, 2018

This should bother my Computer Security students. Since when is a 10% failure rate considered good?
Meghan Bogardus Cortez reports:
University end users are pretty good at identifying a scam.
Only 10 percent of simulated phishing emails sent to users at education institutions were successful, a new study from Wombat Security Technologies reports. The company monitored tens of millions of simulated phishing attacks sent over the course of a year through its Security Education Platform across more than 15 industries.
The State of the Phish 2018 report found that users in education were less likely to click on a phishing attempt than those in technology, entertainment, hospitality, government, consumer goods, retail and telecommunications.
Read more on EdTech Magazine.




We’ve been considering how to prevent Russia from hacking these devices instead of merely chatting on social media.
The Risks of Digital Democracy
Like many segments of the economy and society, democracy is in the process of being digitized, a development that promises new levels of efficiency but also brings new risks. Consider the digitization of voting machines, devices that date back to the 19th century. The growing use of direct recording electronic (DRE) voting machines has made possible fully digitized voting and the availability of near real-time results.
But, the events of this summer’s 25th annual DEF CON computer security conference illustrate the risks that come with these benefits. As part of the conference, software engineers were invited to a Voting Machine Hacking Village to try to break in to commercially available DRE voting machines. The hackers cracked the “secured” systems in less than two hours.




Something the CSO can use to start a discussion with Senior Management? This has come up in several recent breaches.
SEC Tells Execs Not to Trade While Investigating Security Incidents
The U.S. Securities and Exchange Commission (SEC) on Wednesday announced updated guidance on how public companies should handle the investigation and disclosure of data breaches and other cybersecurity incidents.
The SEC has advised companies to inform investors in a timely fashion of all cybersecurity incidents and risks – even if the firm has not actually been targeted in a malicious attack. The agency also believes companies should develop controls and procedures for assessing the impact of incidents and risks.
While directors, officers and the people in charge of developing these controls and procedures should be made aware of security risks and incidents, the SEC believes these individuals should refrain from trading securities while in possession of non-public information regarding a significant cybersecurity incident.




Similar to the conclusions my students have reached.
Global Cybercrime Costs $600 Billion Annually: Study
A report by the security firm McAfee with the Center for Strategic and International Studies found theft of intellectual property represents about one-fourth of the cost of cybercrime in 2017, and that other attacks such as those involving ransomware are growing at a fast pace.
Russia, North Korea and Iran are the main sources of hackers targeting financial institutions, while China is the most active in cyber espionage, the report found.
Criminals are using cutting-edge technologies including artificial intelligence and encryption for attacks in cyberspace, with anonymity preserved by using bitcoin or other cryptocurrency, the researchers said.
The report said there is often a connection between governments and the cybercrime community.




A simple password testing tool.
I've Just Launched "Pwned Passwords" V2 With Half a Billion Passwords for Download
Last August, I launched a little feature within Have I Been Pwned (HIBP) I called Pwned Passwords. This was a list of 320 million passwords from a range of different data breaches which organisations could use to better protect their own systems. How? NIST explains:
When processing requests to establish and change memorized secrets, verifiers SHALL compare the prospective secrets against a list that contains values known to be commonly-used, expected, or compromised.
They then go on to recommend that passwords "obtained from previous breach corpuses" should be disallowed and that the service should "advise the subscriber that they need to select a different secret".
[The comparison tool: https://haveibeenpwned.com/Passwords
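Along with the downloadable list, V2 exposes a range-search API built on k-anonymity: the client hashes the password locally with SHA-1, sends only the first five hex characters of the digest to https://api.pwnedpasswords.com/range/, and compares the returned suffixes offline, so the full password (or even its full hash) never leaves the machine. A minimal client-side sketch — the network call itself is omitted, the function names are my own, and the response format assumed is the documented one of one "SUFFIX:COUNT" pair per line:

```python
import hashlib

def hash_split(password):
    """SHA-1 the password locally and split the hex digest into the
    5-char prefix sent to the API and the 35-char suffix kept private."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(suffix, api_response):
    """Scan the text body returned for GET /range/<prefix> and return
    how many times this password appeared in breach corpuses (0 if never)."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

A registration form could call this during signup and, per the NIST guidance quoted above, advise the subscriber to select a different secret whenever the count is nonzero.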




For my researching students.
Paper – Text mining 101
EU OpenMinted Project Paper – What is text mining, how does it work and why is it useful? “This article will help you understand the basics in just a few minutes. Text mining seeks to extract useful and important information from heterogeneous document formats, such as web pages, emails, social media posts, journal articles, etc. This is often done through identifying patterns within texts, such as trends in words usage, syntactic structure, etc. People often talk about ‘text and data mining (TDM)’ at the same time, but strictly speaking text mining is a specific form of data mining that deals with text…”
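The "trends in word usage" the paper mentions are the simplest pattern to extract: tokenize each document and tally term frequencies, per document and across the corpus. A toy illustration (the function name is my own):

```python
import re
from collections import Counter

def term_frequencies(docs):
    """Tokenize each document (lowercased runs of letters/apostrophes)
    and return per-document Counters plus a corpus-wide Counter."""
    per_doc = [Counter(re.findall(r"[a-z']+", doc.lower())) for doc in docs]
    corpus = Counter()
    for counts in per_doc:
        corpus.update(counts)
    return per_doc, corpus

docs = ["Text mining finds patterns in text.",
        "Data mining and text mining overlap."]
per_doc, corpus = term_frequencies(docs)
```

Comparing such counts across time slices of a corpus is the most basic form of the trend analysis the article describes; real pipelines add stemming, stop-word removal, and syntactic parsing on top.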




Is the sky really falling?
Top Experts Warn Against 'Malicious Use' of AI
Artificial intelligence could be deployed by dictators, criminals and terrorists to manipulate elections and use drones in terrorist attacks, more than two dozen experts said Wednesday as they sounded the alarm over misuse of the technology.
In a 100-page analysis, they outlined a rapid growth in cybercrime and the use of "bots" to interfere with news gathering and penetrate social media among a host of plausible scenarios in the next five to 10 years.
"Our report focuses on ways in which people could do deliberate harm with AI," said Seán Ó hÉigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk.
Contributors to the new report – entitled "The Malicious Use of AI: Forecasting, Prevention, and Mitigation" -- also include experts from the Electronic Frontier Foundation, the Center for a New American Security, and OpenAI, a leading non-profit research company.




I’d say yes, but the cost might be prohibitive.
Can “Fake News” be stopped?
On Wednesday, YouTube was forced to apologize for a video that sat at the top of its “Trending” tab, which shows users the most popular videos on the site. By the time it was removed from the site, it had more than 200,000 views. The problem? The video promoted the conspiracy theory peddled by alt-right propagandists that Parkland, Florida high school student and shooting survivor David Hogg is an actor, “bought and paid by CNN and George Soros.” The conspiracy theory also found its way into a trending position on Facebook, where clicking Hogg’s name “brought up several videos and articles promoting the conspiracy that he’s a paid actor,” according to Business Insider.
The incident highlights the speed at which the spread of false information occurs on algorithmically optimized social media sites that are easy to game. What to do about it is the subject of a new report from the New York think tank Data & Society, “Dead Reckoning: Navigating Content Moderation After ‘Fake News’,” which coincidentally debuted yesterday, just as the Hogg conspiracy theory spread across the internet. Based on a “year of field-based research using stakeholder mapping, discourse and policy analysis, as well as ethnographic and qualitative research of industry groups working to solve ‘fake news’ issues,” the report sets out to define the problem set before offering four strategies for addressing it.




A wake-up slap to California?
Judge says state can't force IMDB to take down actors' ages
A federal judge has blocked a California law that would have forced IMDB to take down actors' ages on request.
The law was signed by Governor Jerry Brown, a Democrat, in September 2016. It was supported by the Screen Actors Guild, which said the law would help prevent age discrimination in film and television hiring.
IMDB quickly challenged the law in court, saying that it "attempts to combat age discrimination in casting through content-based censorship."
… In his order, Chhabria called the law "clearly unconstitutional." He said it "singles out specific, non-commercial content — age-related information — for differential treatment."
The judge also said that even if the defendants, the state of California and the Screen Actors Guild, demonstrated a causal link between the availability of ages on IMDB and age discrimination, it would not be enough to justify a "content based restriction on IMDB's speech."
Chhabria added that "regulation of speech must be a last resort."




Perspective. Perhaps all politicians are delusional.
Bernie blames Hillary for allowing Russian interference
Bernie Sanders on Wednesday blamed Hillary Clinton for not doing more to stop the Russian attack on the last presidential election. Then his 2016 campaign manager, in an interview with POLITICO, said he’s seen no evidence to support special counsel Robert Mueller's assertion in an indictment last week that the Russian operation had backed Sanders' campaign.
The remarks showed Sanders, running for a third term and currently considered a front-runner for the Democratic presidential nomination in 2020, deeply defensive in response to questions posed to him about what was laid out in the indictment. He attempted to thread a response that blasts Donald Trump for refusing to acknowledge that Russians helped his campaign — but then holds himself harmless for a nearly identical denial.




Again I suggest that Amazon buy the USPS.
Postal-Service Workers Are Shouldering the Burden for Amazon




Some classes for my students.




It is always thus for new technologies!


Wednesday, February 21, 2018

Any publicity seems to attract the hacker piranhas.
Note: as Catalin Cimpanu points out on Twitter, “Neither RedLock nor Tesla confirmed that “confidential data” was stolen. Tesla said the opposite in their statement. The reporter is going out on a limb on this one.”
Duncan Riley reports:
Elon Musk may be able to send a Tesla Inc. vehicle into space, but apparently his staff can’t secure data online so easily. A shocking report released this morning details the theft of data from the electric car company, blaming it on gross staff incompetency.
According to researchers at cloud security firm RedLock Ltd., hackers infiltrated Tesla’s Kubernetes console after the company failed to secure it with a password. Within one of the Kubernetes pods, a group of software containers deployed on the same host, sat the access credentials to Tesla’s Amazon Web Services Inc. account.
Read more on SiliconAngle.
[From the article:
Because it’s the fashion in 2018, the hackers then installed cryptomining software, including sophisticated evasion measures to hide the installation.




A “How To” article that allows us to consider “How To Avoid!”
Phishing schemes net hackers millions of dollars from Fortune 500
On Wednesday, researchers from IBM's X-Force Incident Response and Intelligence Services (IRIS) team said the Business Email Compromise (BEC) scheme is currently active and is successfully targeting Accounts Payable (AP) teams at Fortune 500 companies.
In a blog post, the researchers said that after discovering evidence of the threat in Fall 2017, their analysis of the campaign led them to Nigeria, where the threat actors appear to be operating.
The BEC uses social engineering attacks and phishing emails in order to obtain legitimate credentials for enterprise networks and email accounts.
In many cases, publicly available information is used to craft messages which appeared legitimate and entice phishing victims to visit malicious domains.
… This BEC campaign is of special note: no malware was used, and because legitimate employees were conducting the transactions, traditional security products and protocols would not detect any compromise.
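Because no malware is involved, defenses tend to focus on the emails themselves — for example, flagging a sender whose domain is only a character or two away from a trusted domain, a staple BEC trick. A hedged sketch of that lookalike-domain heuristic (the function names and the threshold of 2 are my own illustrative choices, not from the IBM report):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suspicious_sender(sender, trusted_domains, threshold=2):
    """Flag a sender whose domain is close to, but not exactly,
    a trusted domain -- a typical lookalike-domain check."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in trusted_domains:
        return False
    return any(edit_distance(domain, t) <= threshold for t in trusted_domains)
```

An invoice from "cfo@examp1e.com" (digit one for the letter l) would be flagged, while mail from the real domain passes — a cheap check that catches exactly the class of spoof this campaign relies on.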




From the White House! So you know it can’t be “fake news.”
CEA Report: The Cost of Malicious Cyber Activity to US Economy
[February 16, 2018] “the Council of Economic Advisers (CEA) released a report detailing the economic costs of malicious cyber activity on the U.S. economy. Please see below for the executive summary and read the full report here. This report examines the substantial economic costs that malicious cyber activity imposes on the U.S. economy. Cyber threats are ever-evolving and may come from sophisticated adversaries.
  • We estimate that malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016.
  • Cybersecurity experts like to say that in an act of war or retaliation, the first moves will be made in cyberspace. A cyber adversary can utilize numerous attack vectors simultaneously. The backdoors that were previously established may be used to concurrently attack the compromised firms for the purpose of simultaneous business destruction.




For our discussion of Law & Regulation.
The Laws and Ethics of Employee Monitoring
… Federal and most state privacy laws give discretion to employers as to how far they can go with their employee monitoring. In some cases, employers do not have to inform employees of the monitoring, but this depends on state and local laws. Some locations require employee consent to monitor.
"As a general rule, employees have little expectation of privacy while on company grounds or using company equipment, including company computers or vehicles," said Matt C. Pinsker, adjunct professor of homeland security and criminal justice at Virginia Commonwealth University.
Monitoring must be within reason. For example, video surveillance can be conducted in common areas and entrances; however, it should be obvious that surveillance in bathrooms or locker rooms is prohibited and can open a company up to legal repercussions.


Tuesday, February 20, 2018

The Bangladesh Bank hack showed how this could be done. I wonder if this is the same team of hackers or if they have inspired copycats. Did these banks fail to make the security changes SWIFT recommended?
Malicious hackers attempted to steal millions of dollars from banks in Russia and India by abusing the SWIFT global banking network.
A report published last week by Russia’s central bank on the types of attacks that hit financial institutions in 2017 revealed that an unnamed bank was the victim of a successful SWIFT-based attack.
A copy of the report currently posted on the central bank’s website does not specify how much the hackers stole, but Reuters said they had managed to obtain 339.5 million rubles (roughly $6 million).
The news comes after Russia’s Globex bank admitted in December that hackers had attempted to steal roughly $940,000 through the SWIFT system. The attackers reportedly only managed to steal a fraction of the amount they targeted.
In India, City Union Bank issued a statement on Sunday saying that it had identified three fraudulent transfers abusing the SWIFT payments messaging system. One transfer of $500,000 through a Standard Chartered Bank account in New York to a bank in Dubai was blocked and the money was recovered.
The second transfer of €300,000 ($372,000) was made to an account at a bank based in Turkey via a Standard Chartered Bank account in Germany. The funds were blocked at the Turkish bank and City Union hopes to recover the money.
The third transfer was for $1 million and it went to a Chinese bank through a Bank of America account. City Union Bank said the funds were claimed by someone using forged documents.




How close are we to the straw that breaks the camel’s back?
North Korea poised to launch large-scale cyberattacks, says new report
North Korea is quietly expanding both the scope and sophistication of its cyberweaponry, laying the groundwork for more devastating attacks, according to a new report published Tuesday.
… Now it appears that North Korea has also been using previously-unknown holes in the Internet to carry out cyberespionage — the kinds of activities that could easily metamorphose into full-scale attacks, according to a report from FireEye, the California-based cybersecurity company.
… The Worldwide Threat Assessment published by the U.S. intelligence community last week forecast the potential for surprise attacks in the cyber realm would increase over the next year.




Surprise! Someone used your identity to launder money. Have fun explaining that to the Feds.
Money Laundering Via Author Impersonation on Amazon?
Patrick Reames had no idea why Amazon.com sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone has been using it to peddle a $555 book that’s full of nothing but gibberish.




Biometrics can do more than identify you by scanning your face. Should we allow it to? This is similar to those driver-analyzing dongles insurance companies put in cars. Could a look into your eyes increase your health insurance rates?
Google’s new AI algorithm predicts heart disease by looking at your eyes
Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning. By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke. This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.
The algorithm potentially makes it quicker and easier for doctors to analyze a patient’s cardiovascular risk, as it doesn’t require a blood test. But, the method will need to be tested more thoroughly before it can be used in a clinical setting. A paper describing the work was published today in the Nature journal Biomedical Engineering, although the research was also shared before peer review last September.




A question: Is this bad or merely an evolution similar to the introduction of radio and then TV? Perhaps older forms of journalism need to evolve?
CJR – The Facebook Armageddon
Columbia Journalism Review: The social network’s increasing threat to journalism – “At some point over the past decade, Facebook stopped being a mostly harmless social network filled with baby photos and became one of the most powerful forces in media—with more than 2 billion users every month and a growing lock on the ad revenue that used to underpin most of the media industry. When it comes to threats to journalism, in other words, Facebook qualifies as one, whether it wants to admit it or not… The fact that even Facebook’s closest media partners like BuzzFeed are struggling financially highlights the most obvious threat: Since many media companies still rely on advertising revenue to support their journalism, Facebook’s increasing dominance of that industry poses an existential threat to their business models…”




An interesting question: Can you duplicate an algorithm? Since these algorithms are trade secrets (not patented or copyrighted), is there any problem with disclosing how they work?
Facebook is a political battleground where Russian operatives work to influence elections, fake news runs rampant, and political hopefuls use ad targeting to reach swing voters. We have no idea what goes on inside Facebook’s insidious black box algorithm, which controls the all-powerful News Feed. Are politicians playing by the rules? Can we trust Facebook to police them? Do we really have any choice?
One emerging way to hold tech companies like Facebook accountable is to use similar technology to figuratively poke at that black box, gathering data and testing hypotheses about what might be going on inside, almost like early astronomers studying the solar system.
It’s a tactic being pioneered at the nonprofit news organization ProPublica by a team of reporters, programmers, and researchers led by Pulitzer Prize-winning reporter Julia Angwin. Angwin’s team specializes in investigating algorithms that impact people’s lives, from the Facebook News Feed to Amazon’s pricing models to the software determining people’s car insurance payments and even who goes to prison and for how long. To investigate these algorithms, they’ve had to develop a new approach to investigative reporting that uses technology like machine learning and chatbots.


(Related) If Russia was not bringing its “A” game last time, will we be ready for it this time?
Russia's Troll Operation Was Not That Sophisticated
It might be nice for Democrats and #NeverTrumpers to believe that Russia’s troll factory brought Donald Trump the 2016 Presidential Election.
But no.
Special Counsel Robert Mueller’s indictment of 13 Russians associated with the Internet Research Agency definitively shows, given current evidence, that while a small team in St. Petersburg ran a successful audience-development campaign mostly on behalf of Trump, that campaign was neither targeted nor sizable enough to change the election’s result.
Make no mistake: This was self-described and actual “information warfare.” The point was to sow discord and distrust in the American electorate. And with a few dozen people—around 80 at the peak—they managed to reach 150 million people through Facebook and Instagram. The indictment states that in September 2016, the monthly budget of the unit that contained the U.S. election-interference operation was $1.25 million. That’s pretty good bang for the buck.


(Related) Clearly, Russia is poised to take any advantage we offer…
After Florida School Shooting, Russian ‘Bot’ Army Pounced
One hour after news broke about the school shooting in Florida last week, Twitter accounts suspected of having links to Russia released hundreds of posts taking up the gun control debate.
The accounts addressed the news with the speed of a cable news network. Some adopted the hashtag #guncontrolnow. Others used #gunreformnow and #Parklandshooting. Earlier on Wednesday, before the mass shooting at Marjory Stoneman Douglas High School in Parkland, Fla., many of those accounts had been focused on the investigation by the special counsel Robert S. Mueller III into Russian meddling in the 2016 presidential election.
“This is pretty typical for them, to hop on breaking news like this,” said Jonathon Morgan, chief executive of New Knowledge, a company that tracks online disinformation campaigns. “The bots focus on anything that is divisive for Americans. Almost systematically.”




Perspective. Rather clunky infographic, but the voice trend is important.
20% of All Searches are Made with Voice (INFOGRAPHIC)
A new and very interactive infographic by Adzooma takes a look at how online advertising will be trending in 2018. And one of the data points is the growth of voice search, which now makes up 20 percent of inquiries on Google’s mobile app and Android devices.




A very interesting tool.
Tetra’s call recorder and AI-powered transcription app now works for inbound calls
… what if there was a way for you to record a call through your mobile phone and have a full transcription of the discussion delivered to you within minutes? That’s exactly what San Francisco-based Tetra is setting out to enable with its AI-powered iPhone app that not only records your calls but converts the conversations into written form using deep learning and natural language processing (NLP).
… So far, Tetra has only worked with outbound calls, but now subscribers will be able to enjoy the full benefits of Tetra for incoming calls, too.
By way of a quick recap, Tetra is basically a VoIP app that works similarly to Google Voice, insofar as it allocates you a dedicated Tetra number that must be used for all outgoing/incoming calls. Once a call is complete, Tetra will spend a short period of time generating the notes.
… In terms of pricing, everyone can get 60 free minutes per month as part of a trial. Then you’ll have to sign up to the Plus, Pro, or Business plans, which offer varying amounts of call-time per month and range from $9 to $99.
… Then there are the legal and ethical angles to consider. By default, Tetra automatically tells the people on the other end of the call that they are being recorded; however, it is possible for the Tetra subscriber to disable this announcement, with the proviso that you “stay compliant with local law or get recording consent yourself,” according to Tetra.


Monday, February 19, 2018

Now I can insult anyone and the evidence deletes itself?
Obliviate is a new app from MakeUseOf that lets you send self-destructing messages. It’s great for sharing secret messages that you don’t want sticking around on your friends’ phones, among other use cases.
Download: obliviate for Android and iOS.
… The app lets you set a timer between 5 and 180 seconds for how long your messages will last. Once the recipient opens it, the message will disappear after a set time. And if you change your mind, you can immediately obliviate messages and bypass the timer.
Best of all, obliviate is free, has no ads, and never will. Plus, you cannot take screenshots in the app or copy the content of the messages (this feature is currently available on Android only, coming soon to iOS). This prevents others from recording messages you intended to be private.
… Coming soon, obliviate hopes to add encryption, support for audio, pictures, and videos, custom notification sounds, and more! We hope you enjoy the app.
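The self-destruct mechanic described above (a 5–180 second timer that starts when the recipient opens the message, plus the sender’s ability to “obliviate” immediately) can be sketched in a few lines. This is a toy illustration under my own assumptions, not obliviate’s actual code; all names here are made up.

```python
import time

class Message:
    """Toy self-destructing message: timer starts on first open."""

    def __init__(self, text, ttl_seconds):
        if not 5 <= ttl_seconds <= 180:
            raise ValueError("timer must be between 5 and 180 seconds")
        self.text = text
        self.ttl = ttl_seconds
        self.opened_at = None
        self.destroyed = False

    def open(self):
        # The countdown begins only when the recipient first opens it.
        if self.opened_at is None and not self.destroyed:
            self.opened_at = time.monotonic()

    def read(self):
        if self.destroyed or self.opened_at is None:
            return None  # destroyed or not yet opened
        if time.monotonic() - self.opened_at > self.ttl:
            self.obliviate()  # timer expired: content is gone
            return None
        return self.text

    def obliviate(self):
        # Sender can destroy the message immediately, bypassing the timer.
        self.text = ""
        self.destroyed = True
```

Of course, none of this stops the recipient from photographing the screen with another device, which is why the blogger’s “evidence deletes itself” quip deserves a raised eyebrow.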




Interesting that the parents accept this.
AP reports:
A private school in east Georgia intends to start drug-testing its oldest students.
The Columbus Ledger-Enquirer reports that Brookstone School in Columbus recently announced that the drug-testing of students in grades 8-12 will be voluntary next school year — and then mandatory in succeeding years.
Read more on Ledger-Enquirer.
And yes, of course they can get away with doing that as a precondition of acceptance or attendance. They’re a private school. But here’s the thing: parents are waiving their children’s privacy rights. Now I know a lot of parents are just fine with that because they want to know if their child is using drugs. And somewhere, I’m guessing, this school actually/hopefully has a written policy about what happens with the results, for how long they are retained, and with whom they might be shared. And what is the testing facility’s privacy policy? Will they be sent the students’ names as identifiers or just numbers/IDs? And who might they share results with and under what circumstances?
Much to think about here….




Another “Business Continuity” angle for my students to discuss.
Most KFCs in UK remain closed because of chicken shortage
The fast food chain KFC has been forced to temporarily close most of its UK outlets after problems with a new delivery contract led to a chicken shortage.
… The chicken delivery problem is so severe that the company cannot say when operations will be back to normal. But it said it was working “flat out” to resolve the crisis.
… In a statement it blamed the chicken shortage on a contract with delivery company DHL.




An interesting tool. Now, how do we apply it?
Perform Text Analysis with IBM Watson and Google Docs
Google, Microsoft, IBM and Amazon have made it easier for developers to add human cognitive capabilities (also known as artificial intelligence) to their own applications. You need not be a machine learning expert to build a computer program that can recognize objects in photographs, one that transforms human speech to text, or even a chatbot that converses with people in natural language.
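The services the article mentions are just HTTP APIs behind the scenes, but the basic idea of text analysis can be shown with a purely local toy. The word lists and scoring below are my own illustrative assumptions, not Watson’s (or anyone else’s) model.

```python
# Toy sentiment scorer: counts hits against tiny hand-made word lists.
# Real cloud services use trained models; this only illustrates the
# shape of the input and output.
POSITIVE = {"good", "great", "excellent", "love", "easy"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "hard"}

def toy_sentiment(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A real service would return this verdict (plus entities, keywords, and confidence scores) as JSON in response to an authenticated API call.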




Perhaps a metaphor for the Trump Administration?


Sunday, February 18, 2018

A reminder: Just because we rarely see their name in the list of ‘usual suspects’ does not mean they aren’t capable.
Saudi foreign minister calls Iran most dangerous nation for cyber attacks
… Asked which nation he believed was the most dangerous in terms of cyber attacks, Al-Jubeir was unequivocal.
"The most dangerous nation behind cyber attacks? Iran," Al-Jubeir said.
"Iran is the only country that has attacked us repeatedly and tried to attack us repeatedly. In fact they tried to do it on a virtually weekly basis."
… Last September, the U.S. Treasury Department added two Iran-based hacking networks and eight individuals to a U.S. sanctions list, accusing them of taking part in cyber-enabled attacks on the U.S. financial system in 2012 and 2013, Reuters reported.


(Related) Our allies have some skills too.
… The hack had targeted Belgacom, Belgium’s largest telecommunications provider, which serves millions of people across Europe. The company’s employees had noticed their email accounts were not receiving messages. On closer inspection, they made a startling discovery: Belgacom’s internal computer systems had been infected with one of the most advanced pieces of malware security experts had ever seen.
As The Intercept reported in 2014, the hack turned out to have been perpetrated by U.K. surveillance agency Government Communications Headquarters, better known as GCHQ. The British spies hacked into Belgacom employees’ computers and then penetrated the company’s internal systems. In an eavesdropping mission called “Operation Socialist,” GCHQ planted bugs inside the most sensitive parts of Belgacom’s networks and tapped into communications processed by the company.




For my future managers: How do you fail to notice that you only sent 100,000 letters to notify 600,000 people? I would never call this a programming error; the program correctly did what the manager asked it to do.
Jack Corrigan reports:
A programming error kept the IRS from notifying hundreds of thousands of identity theft victims about criminals using their Social Security numbers to get themselves jobs in 2017, according to an internal investigation.
Last year, more than half a million Americans had their identities used by others to get hired, but only first-time victims received a notification from the IRS, the Treasury Inspector General for Tax Administration found. As a result, nearly 460,000 previous victims of employment identity theft were left in the dark about their information getting stolen yet again.
“Most identified victims remain unaware that their identities are being used by other individuals for employment,” TIGTA wrote in its report.
Read more on NextGov.
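The report describes a classic filtering bug: the notification logic excluded everyone who had been notified in a prior year, rather than merely de-duplicating letters within the current incident. A hypothetical sketch of that mistake (all names and data are mine, for illustration only):

```python
# Buggy version: excludes every previously notified victim, so repeat
# victims never get a letter about the NEW misuse of their SSN.
def select_recipients_buggy(victims, previously_notified):
    return [v for v in victims if v not in previously_notified]

# Fixed version: only de-duplicates within the current incident, so
# repeat victims are still notified about new misuse.
def select_recipients_fixed(victims, notified_this_incident):
    return [v for v in victims if v not in notified_this_incident]

victims_2017 = {"A", "B", "C", "D"}   # identities misused in 2017
prior_victims = {"B", "C", "D"}       # also victimized (and notified) before

buggy = select_recipients_buggy(victims_2017, prior_victims)
fixed = select_recipients_fixed(victims_2017, set())
```

Which is exactly the blogger’s point: the code faithfully implemented a bad requirement, and nobody reconciled letters sent against victims identified.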




For my “Why you need a lawyer” lecture.
Revision Legal has a post about insider leaks. The article starts by discussing the Morrisons case in the UK, where an employee vindictively leaked data. In a ruling that surprised many, the court held that although Morrisons was a victim of their employee, other employees who sued Morrisons could hold Morrisons liable:
This creates, in effect, a form of strict liability for an employee data leak (at least in the UK). If the ruling is upheld, Morrisons will face a massive legal liability and, without question, the remaining 94,500 employees will join the class action or file their own lawsuits. Further, it is possible that British regulators will follow the court’s ruling and impose heavy regulatory fines and penalties.
The article then turns to legal principles in the U.S. that would relate to holding an employer liable for an intentional leak by an employee. As the authors note, it’s “complicated.”
Read more on JDSupra.




Just in time for the chapter on Law & Regulation.
David M. Stauss and Gregory Szewczyk of Ballard Spahr LLP write:
As we first reported in our January 22, 2018, alert, the Colorado legislature is considering legislation that, if enacted, would significantly change Colorado privacy and data security law. On Wednesday, February 14, 2018, the bill’s sponsors submitted an amended bill that addresses issues raised by numerous stakeholders, including Ballard Spahr. The amended bill also was heard before the House Committee on State, Veterans, and Military Affairs, where it was unanimously approved.
The most significant changes are highlighted below.
Read more on The National Law Review. And yes, read more, as the proposed state law has some interesting overlaps with, but also differences from, HIPAA and GLBA. And if it is adopted, HIPAA-covered entities would no longer have a 60-day window from discovery to notify; they might have only 30 days.




Now we have to depend on the Postal Service to safeguard the elections? So I have to get a code for Facebook before I can place an ad like “Bob for President.” Can I get that code now? I don’t want to wait until Russia sends me the text of the ad they want me to run. (Let’s hope no one else reads this “secret” code that is written on the postcard!)
Facebook plans to use U.S. mail to verify IDs of election ad buyers
Facebook Inc will start using postcards sent by U.S. mail later this year to verify the identities and location of people who want to purchase U.S. election-related advertising on its site, a senior company executive said on Saturday.
… The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook’s global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.
“If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States,” Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc’s Google also spoke.
“It won’t solve everything,” Harbath said in a brief interview with Reuters following her remarks.
But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.
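Facebook has not published how the scheme works, but the flow Harbath describes, issue a one-time code, print it on a postcard, and verify it when the buyer types it in, can be sketched as follows. The code format, expiry period, and storage here are all assumptions for illustration.

```python
import secrets
import time

CODE_TTL = 30 * 24 * 3600  # assume a code expires 30 days after mailing

pending = {}  # advertiser_id -> (code, issued_at)

def issue_postcard_code(advertiser_id):
    """Generate a one-time code to be printed on the mailed postcard."""
    code = secrets.token_hex(4).upper()  # e.g. '9F2A0C1B'
    pending[advertiser_id] = (code, time.time())
    return code  # handed to the mailing system, never shown online

def verify_code(advertiser_id, submitted):
    """Check a code typed in by the advertiser; codes are single-use."""
    entry = pending.get(advertiser_id)
    if entry is None:
        return False
    code, issued_at = entry
    if time.time() - issued_at > CODE_TTL:
        return False  # postcard took too long or was never received
    if secrets.compare_digest(code, submitted):
        del pending[advertiser_id]  # burn the code after one successful use
        return True
    return False
```

Note what this proves and what it doesn’t: possession of the postcard demonstrates access to a U.S. mailing address, nothing more, which is presumably why Harbath concedes “it won’t solve everything.”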