Saturday, September 28, 2019


Who do you want to win and by how much?
Researchers easily breached voting machines for the 2020 election
The voting machines that the US will use in the 2020 election are still vulnerable to hacks. A group of ethical hackers tested a bunch of those voting machines and election systems (most of which they bought on eBay). They were able to crack into every machine, The Washington Post reports. Their tests took place this summer at the Def Con cybersecurity conference, but the group visited Washington yesterday to share their findings.
A number of flaws allowed the hackers to access the machines, including weak default passwords and shoddy encryption. The group says the machines could be hacked by anyone with access to them, and if poll workers make mistakes or take shortcuts, the machines could be infiltrated by remote hackers.




Is this an overreaction?
New federal rules limit police searches of family tree DNA databases
The U.S. Department of Justice (DOJ) released new rules yesterday governing when police can use genetic genealogy to track down suspects in serious crimes—the first-ever policy covering how these databases, popular among amateur genealogists, should be used in law enforcement, in an attempt to balance public safety and privacy concerns.
But these searches also raise privacy concerns. Relatives of those in the database can fall under suspicion even if they have never uploaded their own DNA. (One study found that 60% of white Americans can now be tracked down using such searches.) And even those who have shared their DNA may not have given informed consent to allow their data to be used for law enforcement searches.
The policy says “forensic genetic genealogy” should generally be used only for violent crimes such as murder and rape, as well as to identify human remains. (The policy permits broader use if the ancestry database’s policy allows such searches.) Police should first exhaust traditional crime solving methods, including searching their own criminal DNA databases.
The policy also bars police from using a suspect’s DNA profile to look for genes related to disease risks or psychological traits. [??? Bob]




Lawyers for the defense?
Five Key Considerations to Developing Defensible AI
In a recent survey published by Forbes, 91% of enterprises said they expect AI to deliver new business growth in the next five years.
The rapidly expanding implementation of artificial intelligence in society means that, inevitably, lawyers will be faced with putting artificial intelligence on the witness stand.
… What should companies implementing AI solutions do now to ensure that they can effectively defend their use of AI when the machine becomes the witness? This article offers some thoughts on developing AI in a defensible way in order to be prepared should the AI system itself become the focal point of a lawsuit.
Pay Attention to the Data Used to Train Your AI
Understand the Black Box
Develop AI in an Ethically Responsible Way
Be Mindful of Privacy Regulations
Always Look to Improve Results




Worth a look!
Get Your Copy of the Free Practical Ed Tech Handbook
Last Sunday I published the updated 2019-20 version of my popular Practical Ed Tech Handbook.
Learning to Program
Augmented and Virtual Reality
Video creation and flipped lessons



Friday, September 27, 2019


Remember, some hackers can read.
GAO Identifies Significant Cybersecurity Risks in US Electric Grid
The new report released by the Government Accountability Office (GAO) reveals that the nation’s electric grid is becoming more vulnerable to cyberattacks.


(Related)
U.S. Navy to Appoint Cyber Chief Following a Blistering Audit
… The new position is part of a broader effort to improve cybersecurity in the Navy and among its private-sector industry partners, coming after a scathing internal audit earlier this year found that repeated compromises of national-security secrets threatened the U.S.’s standing as the world’s top military power.




For my security students: know the enemy!




Before this one becomes law, let’s design a tougher one!
CCPA 2.0? A New California Ballot Initiative is Introduced
On September 13, 2019, the California State Legislature passed the final CCPA amendments of 2019. Governor Newsom is expected to sign the recently passed CCPA amendments into law in advance of his October 13, 2019 deadline. Yesterday, proponents of the original CCPA ballot initiative released the text of a new initiative (The California Privacy Rights and Enforcement Act of 2020) that will be voted on in the 2020 election; if passed, the initiative would substantially expand CCPA’s protections for consumers and obligations on businesses. While the new proposal preserves key aspects of the current CCPA statute, there are some notable additions and amendments.




No surprise.
Most Companies Worldwide Are Not Prepared for New Privacy Regulations
As new privacy regulations continue to appear globally, there is mounting evidence to suggest that most organizations – regardless of size or type of business – are unprepared to deal with them in an effective manner. That’s one of the big takeaways from a recent September 2019 report from the Internet Society’s Online Trust Alliance (OTA), which analyzed more than 1,200 privacy statements from organizations around the world to see how well they adhered to new privacy regulations such as the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and Canada’s Personal Information Protection and Electronic Document Act (PIPEDA).
In many ways, the enactment of the European GDPR in May 2018 set into motion an entirely new approach to data privacy and data security. This paradigm shift included new thinking about how to manage the enormous flows of data passing into and out of organizations on a daily basis, and how to report this information to customers and users. And it also established the fact that organizations would have to start dealing with enormous fines and penalties for any lapses in data privacy or any data security breaches.




From devices to algorithms.
FDA clarifies how it will regulate digital health and artificial intelligence
In releasing the guidelines on mobile health software and CDS tools, the FDA is attempting to communicate how it will implement provisions within the 21st Century Cures Act, a law passed in late 2016 that sought to exempt several categories of health software from FDA review. The legislation gave FDA discretion to determine which specific products will fall under its purview.




A not very scientific assumption.
Don’t Fear the Terminator
Artificial intelligence never needed to evolve, so it didn’t develop the survival instinct that leads to the impulse to dominate others
Takeover by AI has long been the stuff of science fiction. In 2001: A Space Odyssey, HAL, the sentient computer controlling the operation of an interplanetary spaceship, turns on the crew in an act of self-preservation. In The Terminator, an Internet-like computer defense system called Skynet achieves self-awareness and initiates a nuclear war, obliterating much of humanity. This trope has, by now, been almost elevated to a natural law of science fiction: a sufficiently intelligent computer system will do whatever it must to survive, which will likely include achieving dominion over the human race.
To a neuroscientist, this line of reasoning is puzzling. There are plenty of risks of AI to worry about, including economic disruption, failures in life-critical applications and weaponization by bad actors. But the one that seems to worry people most is power-hungry robots deciding, of their own volition, to take over the world. Why would a sentient AI want to take over the world? It wouldn’t.




Perspective. No need to plug in. No need to change batteries.
Photovoltaic-powered sensors for the 'Internet of Things'
By 2025, experts estimate the number of Internet of Things devices—including sensors that gather real-time data about infrastructure and the environment—could rise to 75 billion worldwide.
… MIT researchers have designed photovoltaic-powered sensors that could potentially transmit data for years before they need to be replaced. To do so, they mounted thin-film perovskite cells—known for their potential low cost, flexibility, and relative ease of fabrication—as energy-harvesters on inexpensive radio-frequency identification (RFID) tags.




Perspective.
Internet sector contributes $2.1 trillion to U.S. economy: industry group
The rapidly growing internet sector accounted for $2.1 trillion of the U.S. economy in 2018, or about 10% of the nation’s gross domestic product (GDP), an industry group said on Thursday.
… The study says the internet sector represents the fourth largest sector of the U.S. economy, behind real estate, government and manufacturing.




Something to watch for…
This Website Will Turn Wikipedia Articles Into “Real” Academic Papers
BuzzFeedNews – “The digital product agency MSCHF released a site called M-Journal on Tuesday that will turn any Wikipedia article into a “real” academic article. You can screenshot it, you can cite it — and you can send a link to your teacher. What MSCHF did was republish the entirety of Wikipedia under its own academic journal. If you go over to the site, you can search any Wikipedia article or paste in a link, and it’ll generate a citation that refers to MSCHF’s M-Journal, not Wikipedia…”



Thursday, September 26, 2019


Trying to explain CCPA to my students.
Ready, Set, Sustain: Six Steps Toward CCPA Compliance
The California Consumer Privacy Act (CCPA) is the first major piece of United States privacy legislation, but it won’t be the last. There are already similar bills in the works in Washington, Hawaii, Massachusetts, New Mexico, Rhode Island and Maryland. Introduced on June 28, 2018, the CCPA adopts much of its framework from the European Union General Data Protection Regulation (GDPR) – although there are some subtle differences. For example, the CCPA extends its protections to households and devices, not just individuals, and includes the right to opt-out of the sale of personal information.
Our research suggests a lot of companies were blindsided by how much time and money it takes to sustain compliance. With less than four months until the California Consumer Privacy Act goes into effect on January 1, 2020, this article provides actionable steps for how to work toward a sustainable compliance program.


(Related)
ALI Data Privacy: Overview and Black Letter Text — Available for Download
Professor Paul Schwartz and I have posted the black letter text of the American Law Institute (ALI), Principles of the Law, Data Privacy. Professor Paul Schwartz and I were co-reporters on the project. Earlier this year, I wrote a post about our completion of the project. According to the ALI press release: “The Principles seek to provide a set of best practices for entities that collect and control data concerning individuals and guidance for a variety of parties at the federal, state, and local levels, including legislators, attorneys general, and administrative agency officials.”




Probably as much insight as we can find.
What's New In Gartner's Hype Cycle For AI, 2019
Gartner’s definition of Hype Cycles includes five phases of a technology’s lifecycle and is explained here. Gartner’s latest Hype Cycle for AI reflects the growing popularity of AutoML, intelligent applications, AI platform as a service or AI cloud services as enterprises ramp up their adoption of AI.


(Related) Voice is being added to every ‘Internet of Things’ thing.
Alexa's 'Certified for Humans' wants to eliminate smart-home headaches
Amazon wants you to be able to set up your smart home even if you don't know anything about tech.
… To achieve the certification, devices need to use frustration-free setup. This allows Alexa to share your Wi-Fi credentials so you don't have to re-enter the info. The device needs to allow over-the-air updates to happen in the background. Finally, you need to be able to set up and control the devices with the Alexa app.


(Related)
Ring announces new cameras and a conversational doorbell
At its fall event today, Amazon announced two new versions of its Ring Stick Up security cameras and showed off a way for Alexa to have conversations with people who come to your door via a Ring doorbell.


(Related)
Amazon’s Echo Frames are eyeglasses with Alexa




Perspective.
The Internet Leads in Advertising by a Crazy Wide Margin
Zenith Media's Advertising Expenditure Forecasts June 2019 covers the world, looks at data from 2007 to 2018, and forecasts it clearly: in a couple of years, digital/online ad spending will account for 52 percent of all the dollars spent to get your attention.
TV is a distant second place at 27 percent of advertising spending projected for 2021. After that comes an even more precipitous drop. The third place goes to outdoor ads! That means billboards will get more money spent on them than newspapers (6 percent), radio (5 percent), or magazines (3 percent).




PowerPoint is unlikely to rise above ‘tolerable.’
Microsoft launches its AI presentation coach for PowerPoint
A few months ago, Microsoft announced that PowerPoint would soon get an AI-powered presentation coach that could help you prepare for that important next presentation by giving you immediate feedback. Today, the company is launching this new tool, starting with the web version of PowerPoint.
The new PowerPoint Presentation Coach aims to take the hassle out of practicing. In its current version, the tool looks at three things: pace, slide reading and word choice. Pace is pretty self-explanatory and looks at how fast or slow somebody is speaking. The “slide reading” feature detects when you are simply reading the words from your slides word for word. Nobody wants to sit through that kind of presentation. The “word choice” tool doesn’t just detect how often you say “um,” “ah,” “actually” or “basically,” it also gives you feedback when you are using culturally insensitive phrases like “you guys” or “best man for the job.”
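Out of curiosity, here is a minimal sketch of the kind of checks described above: speaking pace and filler-word counts computed from a rehearsal transcript. This is not Microsoft's code; the filler list and the 110–170 words-per-minute "comfortable" range are my own assumptions.

```python
# Minimal sketch (not Microsoft's implementation): estimate speaking pace and
# count filler words from a rehearsal transcript. Thresholds are illustrative.
import re

FILLERS = {"um", "ah", "uh", "actually", "basically", "like"}

def coach_feedback(transcript: str, duration_seconds: float) -> dict:
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    wpm = len(words) / (duration_seconds / 60.0)
    filler_counts = {w: words.count(w) for w in FILLERS if w in words}
    pace = "comfortable"
    if wpm < 110:
        pace = "too slow"
    elif wpm > 170:
        pace = "too fast"
    return {"words_per_minute": round(wpm, 1),
            "pace": pace,
            "filler_words": filler_counts}

print(coach_feedback("Um, so basically this quarter was, um, actually great", 10))
```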



Wednesday, September 25, 2019


Bringing us doom, gloom, and a President who polarizes the country?
Russian Secret Weapon Against U.S. 2020 Election Revealed In New Cyberwarfare Report
The FBI has warned that “the threat” to U.S. election security “from nation-state actors remains a persistent concern,” that it is “working aggressively” to uncover and stop, and the U.S. Director of National Intelligence has appointed an election threats executive, explaining that election security is now “a top priority for the intelligence community—which must bring the strongest level of support to this critical issue.”
With this in mind, a new report from cybersecurity powerhouse Check Point makes for sobering reading. “It is unequivocally clear to us,” the firm warns, “that the Russians invested a significant amount of money and effort in the first half of this year to build large-scale espionage capabilities. Given the timing, the unique operational security design, and sheer volume of resource investment seen, Check Point believes we may see such an attack carried out near the 2020 U.S. Elections.”
And the most chilling finding is that Russia has built its ecosystem to ensure resilience, with cost no object. It has formed a fire-walled structure designed to attack in waves. Check Point believes this has been a decade or more in the making and now makes concerted Russian attacks on the U.S. “almost impossible” to defend against.
… It’s known and accepted within the U.S. security community that the elections will almost certainly come under some level of attack. But the findings actually point to something much more sinister: a cyber warfare platform with implications not only for the election but also for power grids, transportation networks, and financial services.
[An alternate link to the report: https://www.intezer.com/blog-russian-apt-ecosystem/ ]


(Related) How can this possibly help?
Facebook promises not to stop politicians’ lies & hate
Facebook confirms it won’t fact check politicians’ speech or block their content if it’s newsworthy even if it violates the site’s hate-speech rules or other policies. This cementing of its policy comes from Facebook’s head of global policy and communication Nick Clegg, who gave a speech today about Facebook’s plans to prevent interference in the 2020 presidential election.




We need security training that is as frequent and as habit-forming as dealing with large volumes of email.
Webroot Report: Nearly Half of Employees Confess to Clicking Links in Potential Phishing Emails at Work
BROOMFIELD, Colo., Webroot, a Carbonite (CARB) company, released a report, Hook, Line and Sinker: Why Phishing Attacks Work, that sheds light on psychological factors impacting an individual's decision to click on a phishing email.
While a majority (79%) of people reported being able to distinguish a phishing message from a genuine one, nearly half (49%) also admit to having clicked on a link from an unknown sender while at work. Further, nearly half (48%) of respondents said their personal or financial data had been compromised by a phishing message. However, of that group more than a third (35%) didn't take the basic step of changing their passwords following a breach. Not only is this false confidence potentially harmful to an employee's personal and financial data, but it also creates risks for companies and their data.




Compliance is not always what was intended. (And some companies are smarter than some countries?)
Google refuses to pay publishers in France
Google will not pay press publishers in France to display their content and will instead change the way articles appear in search results, a senior executive said on Wednesday.
The announcement pours cold water on publishers' hopes of obtaining more money from the tech giant for displaying their content under the European Union's new copyright regime, which France was the first to transpose into national law.
To apply the new copyright rules in France, Google will instead change the way news results appear on its search engine by removing so-called snippets, or short excerpts from the article.
"When the French law comes into force, we will not show preview content in France for a European news publication unless the publisher has taken steps to tell us that's what they want," the tech giant said in a separate blog post.
Hyperlinks and "very short extracts" of press articles are not covered by the neighboring right, meaning Google can display them on the platforms without signing a licensing agreement.
In Germany, where a neighboring right existed prior to the EU directive, many German publishers decided to give Google their content for free after their traffic plummeted when snippets no longer appeared on search results.




My AI can file more patent applications than your AI!
AI at the USPTO
The USPTO has extended its public comment period on the subject of patenting artificial intelligence inventions. The due date, originally October 11, 2019, is now November 8, 2019.




I suspect they are far too optimistic, but then I’m a pessimist by training.
Explainable AI: Bringing trust to business AI adoption
… At the center of this problem is a technical question shrouded by myth. There's a widely held belief out there today that AI technology has become so complex that it's impossible for the systems to explain why they make the decisions that they do. And even if they could, the explanations would be too complicated for our human brains to understand.
The reality is that many of the most common algorithms used today in machine learning and AI systems can have what is known as “explainability” built in. We're just not using it — or are not getting access to it. For other algorithms, explainability and traceability functions are still being developed, but aren't far out.
Here you will find what explainable AI means, why it matters for business use, and what forces are moving its adoption forward — and which are holding it back.
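To make the "explainability built in" point concrete, here is a small sketch using a plain linear model, where the learned coefficients directly show which features push a prediction up or down. The dataset is a stock scikit-learn example chosen only for illustration, not anything from the article.

```python
# A minimal sketch of "built-in" explainability: for a linear model, each
# feature's coefficient shows how it pushes the prediction up or down.
# (Coefficients here are on standardized features, so magnitudes are comparable.)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(data.data, data.target)

coefs = pipe.named_steps["logisticregression"].coef_[0]
weights = sorted(zip(data.feature_names, coefs),
                 key=lambda pair: abs(pair[1]), reverse=True)
for name, coef in weights[:5]:
    print(f"{name:30s} {coef:+.3f}")
```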


(Related)
Fiddler raises $10.2 million for AI that explains its reasoning
Explainable AI, which refers to techniques that attempt to bring transparency to traditionally opaque AI models and their predictions, is a burgeoning subfield of machine learning research. It’s no wonder — models sometimes learn undesirable tricks to accomplish goals on training data, or they develop biases with the potential to cause harm if left unaddressed.
That’s why Krishna Gade and Amit Paka founded Fiddler, a Mountain View, California-based startup developing an “explainable” engine that’s designed to analyze, validate, and manage AI solutions.




I don’t get it. Perhaps I’ll learn eventually?
New AI Systems Are Here to Personalize Learning
Ahura AI is developing a product to capture biometric data [sic] from adult learners who are using computers to complete online education programs. The goal is to feed this data to an AI system that can modify and adapt their program to optimize for the most effective teaching method.
Currently, Ahura’s system uses the video camera and microphone that come standard on the laptops, tablets, and mobile devices most students are using for their learning programs.
With the computer’s camera Ahura can capture facial movements and micro expressions, measure eye movements, and track fidget score (a measure of how much a student moves while learning). The microphone tracks voice sentiment, and the AI leverages natural language processing to review the learner’s word usage.
From this collection of data Ahura can, according to Talebi, identify the optimal way to deliver content to each individual.
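As a thought experiment (this is not Ahura's algorithm), a crude motion-based "fidget score" could be as simple as averaging frame-to-frame pixel change from the webcam. The sketch below uses OpenCV; the frame count and video source are arbitrary assumptions.

```python
# Rough sketch of a motion-based "fidget score": average frame-to-frame pixel
# change from a webcam feed. Purely illustrative; not Ahura's method.
import cv2
import numpy as np

def fidget_score(video_source=0, num_frames=150):
    cap = cv2.VideoCapture(video_source)
    prev = None
    diffs = []
    for _ in range(num_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute pixel difference between consecutive frames
            diffs.append(np.mean(cv2.absdiff(gray, prev)))
        prev = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0

if __name__ == "__main__":
    print("fidget score:", fidget_score())
```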




You want to write right, right?
Grammarly uses AI to detect the tone and tenor of your writing
Nailing the right tone and tenor is of critical importance where writing’s concerned, whether the audiences of said writing are friends, coworkers, or hiring managers. The trouble is, without an extra pair of eyes, it’s rarely easy to know whether your work will have the intended effect.
Fortunately, Grammarly, a developer of cloud-hosted online grammar checking and plagiarism detection tools, has developed a tone detector the company claims can identify subtle contextual clues conveying a range of tempers. Essentially, it taps a battery of hard-coded rules and machine learning algorithms to spot signals in a piece contributing to its tone, including word choice, phrasing, punctuation, and even capitalization.
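For illustration only, a toy rule-based detector for a few of the tone signals mentioned (word choice, punctuation, capitalization) might look like the sketch below. Grammarly's actual system also layers machine learning on top of its rules; the signal lists here are my assumptions.

```python
# Toy rule-based tone-signal sketch: word choice, punctuation, capitalization.
# Illustrative only; not Grammarly's actual rules or models.
import re

CONFIDENT = {"definitely", "certainly", "absolutely"}
TENTATIVE = {"maybe", "perhaps", "possibly", "might"}

def tone_signals(text: str) -> list:
    words = set(re.findall(r"[a-z']+", text.lower()))
    signals = []
    if words & CONFIDENT:
        signals.append("confident")
    if words & TENTATIVE:
        signals.append("tentative")
    if "!" in text:
        signals.append("excited")
    if re.search(r"\b[A-Z]{3,}\b", text):  # a run of capitals reads as emphasis
        signals.append("emphatic (all caps)")
    return signals or ["neutral"]

print(tone_signals("Maybe we should WAIT before shipping this!"))
```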



Tuesday, September 24, 2019


A good month to teach Computer Security.
National Cybersecurity Awareness Month 2019 Theme: ‘Own IT. Secure IT. Protect IT’




Could this become a model law? Please?
Dell Cameron reports:
Starting next Tuesday, Nevada residents may choose to opt-out of having their personal information resold by online businesses. A privacy bill, signed into law this May, requires website operators to respond to requests from consumers and halt the sale of their personal information within 60 days—or potentially face strict fines.
Read more on Gizmodo.




What could possibly go wrong?
Joe Cadillic writes:
It is hard to imagine a more intrusive home surveillance device than a faucet or toilet that listens to everyone’s conversations, but that is just what Delta Faucet and Kohler have done.
Delta Faucet’s “Voice IQ” takes advantage of where lots of people like to congregate and turns it into an Alexa eavesdropping center.
“Designed with the understanding that 20 percent of all WiFi-enabled homes are equipped with a connected home device, VoiceIQ Technology pairs with existing devices to dispense the exact amount of water needed, all with a simple voice command.”
Delta lets Alexa decide how much water everyone gets.
“VoiceIQ Technology allows users to easily warm water and turn it on and off with voice activation, lending a hand in an active kitchen space. Consumers can command the faucet to dispense a metered amount of water in various quantities for precise measurement. Additionally, consumers can customize commands to make everyday tasks easier, like filling a coffee pot, a child’s sippy cup, or a dog bowl.” (To learn more about Voice IQ, click here.)
What they are really saying is Amazon will now monitor your home and individual water usage.
How is that for Orwellian?
Read more on MassPrivateI.




Today phones, tomorrow screaming children, loose pets and fast food.
Australia rolling out AI cameras to catch and fine smartphone-distracted drivers
Everyone knows it's dangerous to text and drive, and yet smartphones seem an irresistible temptation in traffic for too many people.
If the enforcement hasn't been sufficient to change this behavior, that's about to change. Melbourne-based Acusensus says its distracted driver detection system operates 24/7, in good or bad weather, impervious to sun glare and perfectly capable in darkness. It automatically detects violations using artificial intelligence algorithms, then creates encrypted, traceable, evidence-grade packages that can be used by law enforcement to issue tickets, or take the matter all the way through the court system.
Given that it's both a safety-focused system and one that can reliably haul in revenue for governments, it's reasonable to expect this technology will spread quickly.
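For my security students: one way to make captured evidence "traceable" and tamper-evident is to hash the image and sign the metadata. The sketch below is a generic illustration of that idea, not Acusensus's actual package format; the key handling is deliberately simplified and the field names are invented.

```python
# Minimal sketch of a tamper-evident "evidence package": hash the captured
# image and sign the metadata with an HMAC so later changes are detectable.
# Illustrative only; real systems use managed keys and stronger signatures.
import hashlib, hmac, json, time

SECRET_KEY = b"replace-with-a-real-key"  # placeholder key for the sketch

def build_evidence_package(image_bytes: bytes, camera_id: str, plate: str) -> dict:
    record = {
        "camera_id": camera_id,
        "plate": plate,
        "captured_at": time.time(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict, image_bytes: bytes) -> bool:
    sig = record.pop("signature")
    payload = json.dumps(record, sort_keys=True).encode()
    ok = hmac.compare_digest(
        sig, hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest())
    record["signature"] = sig
    return ok and record["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
```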




GDPR does not (yet) rule the world.
Google wins in 'right to be forgotten' fight with France
Google won its fight against tougher “right to be forgotten” rules after Europe’s top court said on Tuesday it does not have to remove links to sensitive personal data worldwide, rejecting a French demand.
The case is seen as a test of whether Europe can extend its laws beyond its borders and whether individuals can demand the removal of personal data from internet search results without stifling free speech and legitimate public interest.




Automating crime as a business strategy?
The Extended Corporate Mind: When Corporations Use AI to Break the Law
Diamantis, Mihailis, The Extended Corporate Mind: When Corporations Use AI to Break the Law (July 18, 2019). North Carolina Law Review, Vol. 97, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3422429 or http://dx.doi.org/10.2139/ssrn.3422429

“Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. Unless the law adapts, corporations will become increasingly immune to civil and criminal liability as they transfer responsibility from employees to algorithms. This Article is the first to tackle the full extent of the growing doctrinal gap left by algorithmic corporate misconduct. To hold corporations accountable, the law must sometimes treat them as if they “know” information stored on their servers and “intend” decisions reached by their automated systems. Cognitive science and the philosophy of mind offer a path forward. The “extended mind thesis” complicates traditional views about the physical boundaries of the mind. The thesis states that the mind encompasses any system that sufficiently assists thought, e.g. by facilitating recall or enhancing decision-making. For natural people, the thesis implies that minds can extend beyond the brain to include external cognitive aids, like rolodexes and calculators. This Article adapts the thesis to corporate law. It motivates and proposes a doctrinal framework for extending the corporate mind to the algorithms that are increasingly integral to corporate thought. The law needs such an innovation if it is to hold future corporations to account for their most serious harms…”




Plays on a smartphone only.
Introducing ‘Stealing Ur Feelings,’ an Interactive Documentary About Big Tech, AI, and You
An augmented reality film revealing how the most popular apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize democracy makes its worldwide debut on the web today. Using the same AI technology described in corporate patents, “Stealing Ur Feelings,” by Noah Levenson, learns the viewers’ deepest secrets just by analyzing their faces as they watch the film in real-time.




Eventually, we will get this right. Perhaps an AI will help.
10 policy principles needed for artificial intelligence
New policies need to be created for artificial intelligence (AI) in order to govern its use while allowing for innovation, according to the US Chamber's Technology Engagement Center and Center for Global Regulatory Cooperation.




Perspective.
Denver Broncos Are Utilizing AI Scanners To Upgrade The Food And Beverage Experience




Just don’t Tweet like the President, just don’t.
Twitter 101 for Lawyers
Law Technology Today: “The recent 2019 ABA Profile of the Legal Profession” report says that only 25% of lawyers personally use or maintain a presence on Twitter for professional purposes. That same report also states that only 14% of law firms use Twitter (down from a high of 21% in 2016). My business clients—who in my opinion, are the greatest sales professionals on the planet—like to use a word known as “whitespace” to describe new potential opportunities with customers. I believe that Twitter is “whitespace” for lawyers. Embracing Twitter provides lawyers with a highly powerful—and free—opportunity to learn, build your organization’s brand, build your personal brand and develop relationships. In this article, we explore some strategies for you to do so… The beauty of Twitter is that you need to be “short and sweet” in your Tweets as you are limited to 280 characters. Using Twitter has taught me to become a more effective and efficient digital communicator with my business clients. You can also enhance your Tweets—and increase the likelihood they will be viewed—by adding some visual content in the form of emojis, pictures, videos, GIFs, etc….”




One way Congress should be analyzed. By members or issues.
The Language of Congress – What topics do members of Congress tweet about most frequently?
The Pudding – “We fed thousands of Congressional tweets to a machine learning algorithm in order to recognize political issues. We’ll keep doing this every day of the 116th Congress, from January 3, 2019 through January 3, 2021. These are the topics that dominate members of Congress’ public discourse: the issues they discuss (and don’t discuss). We’ve visualized the data by member of Congress and by issue.”
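For anyone curious how such topic tagging works in principle, here is a tiny sketch of supervised tweet classification with TF-IDF features. The labeled examples and topic names are invented for illustration; The Pudding's actual model and training data are not described in this excerpt, and a real classifier would need far more labeled tweets.

```python
# Small sketch of the general approach: classify tweets into policy topics
# with a supervised model. Training data below is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "We must lower prescription drug prices for seniors",
    "Our bill expands rural broadband and 5G access",
    "Proud to support our troops and veterans today",
    "Climate change demands investment in clean energy",
]
train_topics = ["healthcare", "technology", "veterans", "environment"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_tweets, train_topics)

print(clf.predict(["Wind and solar jobs are the future of our economy"]))
```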