Saturday, November 02, 2019


Will President Trump follow their lead?
Thailand unveils 'anti-fake news' center to police the internet
Thailand unveiled an “anti-fake news” center on Friday, the Southeast Asian country’s latest effort to exert government control over a sweeping range of online content.
Minister of Digital Economy and Society Puttipong Punnakanta broadly defined “fake news” as any viral online content that misleads people or damages the country’s image. He made no distinction between non-malicious false information and deliberate disinformation.
“The center is not intended to be a tool to support the government or any individual,” Puttipong said on Friday before giving reporters a tour.




A reminder.
Texas Updates Data Breach Notification Requirements
Effective January 1, 2020, the Texas legislature will impose new notification requirements on businesses that maintain personal information of customers. House Bill 4390 amends the Texas Identity Theft Enforcement and Protection Act by requiring that Texas residents be notified of a data security breach within sixty (60) days of the determination that a breach has occurred.
The notification to the Texas Attorney General must include the following information (a minimal record sketch follows the list):
    • A detailed description of the breach or the use of sensitive information acquired during the breach
    • The number of Texas residents affected
    • Measures taken to date regarding the breach
    • Any measures that will be taken in the future regarding the breach
    • An indication of whether law enforcement has been notified.
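For breach-response planning, those five items map naturally onto a simple record that can be filled in as an investigation proceeds. A minimal sketch in Python; the class and field names are mine, not statutory language:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TexasAGBreachNotice:
    """Data points HB 4390 requires in a notice to the Texas Attorney
    General. Illustrative only; field names are not statutory terms."""
    breach_description: str        # detailed description of the breach or use of sensitive information
    texas_residents_affected: int  # number of Texas residents affected
    measures_taken: List[str] = field(default_factory=list)    # measures taken to date
    measures_planned: List[str] = field(default_factory=list)  # measures planned for the future
    law_enforcement_notified: bool = False                     # whether law enforcement has been notified

Keeping such a record current makes it harder to file a notice with one of the statutory elements missing.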




Every security manager should subscribe to this feed. (Assumes you have a complete inventory of software.)
US MS-ISAC Releases the October List of End of Support Software
The Multi-State Information Sharing and Analysis Center (MS-ISAC) of the Center for Internet Security has released the October 2019 list of software that is currently in or nearing end of support.
When software has reached end of support (EoS), it means the developers will no longer release fixes for any bugs that are found in the software. This includes fixes for security vulnerabilities that may be discovered.
As part of that mission, the MS-ISAC releases a monthly report detailing the software that is in or approaching end of support.
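If you do have that software inventory, checking it against end-of-support dates is a small scripting job. A hypothetical sketch, assuming a CSV inventory with a 'product' column; the EOL table below uses a few well-known dates purely as examples and would be populated from the MS-ISAC report:

import csv
from datetime import date

# Example end-of-support dates; populate from the MS-ISAC monthly report.
EOL_DATES = {
    "Windows 7": date(2020, 1, 14),
    "Windows Server 2008 R2": date(2020, 1, 14),
    "Python 2.7": date(2020, 1, 1),
}

WARN_DAYS = 90  # flag anything within 90 days of end of support

def check_inventory(path: str) -> None:
    """Read an inventory CSV with a 'product' column and flag EoS risk."""
    today = date.today()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            product = row["product"]
            eol = EOL_DATES.get(product)
            if eol is None:
                continue  # not on the tracked list
            days_left = (eol - today).days
            if days_left < 0:
                print(f"{product}: PAST end of support ({eol})")
            elif days_left <= WARN_DAYS:
                print(f"{product}: end of support in {days_left} days ({eol})")

check_inventory("inventory.csv")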




Think about isolated islands of Internet in Russia, China, the UK and others.
Cyberbalkanization and the Future of the Internet
On May 1, 2019, Russia's President Vladimir Putin signed into law what is generally known as the Sovereign Internet law. It came into effect on November 1, 2019, and is ostensibly designed as a defensive mechanism against any foreign attempts -- namely, by the U.S. -- to harm the Russian internet by cutting access to foreign (non-Russian) servers.
In principle, the concept is relatively simple. Russia will establish its own shadow, Russia-only DNS system. Under duress, or on demand, Russian ISPs would be instructed to switch to the alternative DNS. This would ensure that all Russia-to-Russia communications never leave Russian territory, and a Russian national internet would be protected. Of course, it also means that all internal communication can be more easily intercepted, and that Russian citizens could be prevented from visiting selected websites in the rest of the world.
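Mechanically, the cutover is mostly a resolver change: clients are pointed at a state-run DNS hierarchy instead of resolvers that follow the global root. A toy illustration using the dnspython library; the resolver address here is invented:

import dns.resolver

# Hypothetical address of a national, state-run DNS resolver.
NATIONAL_RESOLVER = "10.50.0.53"

# Ignore the operating system's resolver configuration entirely
# and send every query to the national resolver.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [NATIONAL_RESOLVER]

# Every lookup now goes through (and can be filtered by) that resolver;
# names it refuses to answer simply stop existing for this client.
answer = resolver.resolve("example.ru", "A")
for record in answer:
    print(record.address)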




An opinion that follows you for a lifetime?
From the Road-to-Hell-is-Paved-with-Good-Intentions and What-Could-Possibly-Go-Wrong? departments, T. Keung Hui reports:
Some Wake County parents are refusing to give permission for teachers to conduct surveys that rate and track the behavioral health of their students.
The Wake County school system will have teachers at around 40 schools rate their students on 34 questions, such as how often they’ve appeared angry, expressed thoughts of hurting themselves, expressed strange or bizarre thoughts, appeared depressed or engaged in risk-taking behavior.
School officials say the Behavior Intervention Monitoring Assessment System, or BIMAS-2, will help them identify students who are at risk of future academic, behavior or emotional difficulties.
Read more on The News & Observer.
According to the publisher of BIMAS-2, a masters-level teacher can administer the system (as can some other specialties), but if I were a parent, I would opt my kid(s) out: until schools do a much better job of securing data and protecting privacy, I would not want such data on file for my children.


(Related)
Caroline Haskins has a must-read article about Gaggle that is part of a BuzzFeed News package on schools and social media surveillance. This article begins:
For the 1,300 students of Santa Fe High School, participating in school life means producing a digital trail — homework assignments, essays, emails, pictures, creative writing, songs they’ve written, and chats with friends and classmates.
All of it is monitored by student surveillance service Gaggle, which promises to keep Santa Fe High School kids free from harm.
Santa Fe High, located in Santa Fe, Texas, is one of more than 1,400 schools that have taken Gaggle up on its promise to “stop tragedies with real-time content analysis.” It’s understandable why Santa Fe’s leaders might want such a service. In 2018, a shooter killed eight students and two teachers at the school. Its student body is now part of the 4.8 million US students that the for-profit “safety management” service monitors.
Read more on BuzzFeed.




Perspective.
Zack Whittaker reports:
Twitter says the number of government demands for user data is at a record high.
In its latest transparency report, covering the six months between January and June, the social media giant said it received 7,300 demands for user data, up 6% from a year earlier, but that the number of accounts affected is down by 25%.
Read more on TechCrunch.




Some points.
AI for good or evil? AI dangers, advantages and decisions
The main ways AI is being used for good today are "predictive analytics, intelligence consolidation and to act as a trusted advisor that can respond automatically," FireEye's Muppidi said.
AI is already widely used for fraud -- including for operating botnets out of infected computers that work solely as internet traffic launderers, Tiffany said. But myriad other ways exist for AI to be harnessed.
A big, but sometimes overlooked, truth when it comes to the use of AI is that, unlike corporate America, cybercriminals don't have to care about or comply with the General Data Protection Regulation, privacy regulations -- or laws and regulations of any kind, for that matter. This allows them to do vastly more invasive data collection, according to Tiffany.
Right now, a lot of defensive security work isn't really about presenting an impregnable barrier to adversaries. Rather it's about creating a better barrier than other potential victims so that predators choose a different victim, Tiffany said. "A lot of security works like this: It's not about outrunning the bear; it's about outrunning the other people who are running from the bear."




Add an AI to nag you into eating healthy? Report your health metrics to your health insurance company?
What Google's Fitbit Buy Means for the Future of Wearables
When Fitbit launched its first product in 2009, the activity tracker didn’t even share data with a smartphone app. Instead, it wirelessly connected to a base station that had to be tethered to your computer. The clip-on itself displayed some information, but Fitbit’s website was where you’d find visualizations of your personal activity data. It was a kind of gateway drug to what would become our full-fledged 2010s quantified-self addictions.
Over the years Fitbit would become known for its accessible hardware, but it was its software—its mobile app, social network, sleep tracking, subscription coaching—that made it stand out in an ocean of fitness wearables.
… “The tradeoff will be, ‘I don’t want one company knowing all of this about me,’ versus, ‘I can see the value,’” he says.




You too can have a warped view of reality!
What to Read, Watch, and Listen to In Preparation For the Robot Apocalypse
With 'Terminator: Dark Fate' out this weekend, we've rounded up the books, movies, and shows to prep you for the day we get terminated.



Friday, November 01, 2019


Not good, if true.
67 per cent of industrial organizations do not report cybersecurity incidents
A recent Kaspersky survey has discovered that two-thirds (67 per cent) of industrial organizations do not report cybersecurity incidents to regulators.
Kaspersky’s State of Industrial Cybersecurity 2019 report shows that many companies are flouting reporting guidelines – perhaps to avoid regulatory punishments and public disclosure that can harm their reputation. In fact, respondents said that more than half (52 per cent) of incidents lead to a violation of regulatory requirements, while 63 per cent of them consider loss of customer confidence in the event of a breach as a major business concern.




A really useful tip. Grab this ebook!
Resources for Measuring Cybersecurity
Kathryn Waldron at R Street has collected all of the different resources and methodologies for measuring cybersecurity.




Words to inflame legislators?
Has Facebook Become Too Big to Fail?




Avoiding Skynet.
Defense Innovation Board unveils AI ethics principles for the Pentagon
The Defense Innovation Board, a panel of 16 prominent technologists advising the Pentagon, today voted to approve AI ethics principles for the Department of Defense. The report includes 12 recommendations for how the U.S. military can apply ethics in the future for both combat and non-combat AI systems, organized around five main principles: responsible, equitable, traceable, reliable, and governable.
The document titled “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense” and an accompanying white paper will be shared on the Defense Innovation Board website, a DoD spokesperson told VentureBeat.




The assumption that AI must follow human thought patterns is probably an error.
We Shouldn’t be Scared by ‘Superintelligent A.I.’
The idea of artificial intelligence going awry resonates with human fears about technology. But current discussions of superhuman A.I. are plagued by flawed intuitions about the nature of intelligence.
We don’t need to go back all the way to Isaac Asimov — there are plenty of recent examples of this kind of fear. Take a recent Op-Ed essay in The New York Times and a new book, “Human Compatible,” by the computer scientist Stuart Russell.
The assumption seems to be that this A.I. could surpass the generality and flexibility of human intelligence while seamlessly retaining the speed, precision and programmability of a computer. This imagined machine would be far smarter than any human, far better at “general wisdom and social skills,” but at the same time it would preserve unfettered access to all of its mechanical capabilities. And as Dr. Russell’s example shows, it would lack humanlike common sense.
The problem with such forecasts is that they underestimate the complexity of general, human-level intelligence. Human intelligence is a strongly integrated system, one whose many attributes — including emotions, desires, and a strong sense of selfhood and autonomy — can’t easily be separated.




Conservative.
5 ways AI will evolve from algorithm to co-worker
Now that Siri and Alexa have moved from guest to family member at home, the next frontier for artificial intelligence-powered virtual assistants is the office.
KPMG's Traci Gusher thinks that these assistants will soon move out of the basic "What's the weather going to be?" phase to take on more work-specific tasks. In the next stage of artificial intelligence (AI) development, humans will be able to use virtual assistants as notetakers. These assistants will need coaching along the way just like any junior employee. Gusher predicts the technology will reach the ideal state of "virtual keepers of wisdom" by 2030. At that point, the virtual assistants will be able to track the news, figure out the relevance to a company's business, and then analyze existing contracts to spot any necessary changes or new advantages.




What other departments have vast stores of data?
DOE readies multibillion-dollar AI push
The U.S. Department of Energy (DOE) is planning a major initiative to use artificial intelligence to speed up scientific discoveries. At a meeting last week, DOE officials said they will likely ask Congress for between $3 billion and $4 billion over 10 years, roughly the amount the agency is spending to build next-generation "exascale" supercomputers. But DOE has a unique asset: torrents of data. The agency funds atom smashers, large-scale surveys of the universe, and the sequencing of thousands of genomes. Algorithms trained with these data could help discover new materials or rare signals of new particles in the deluge of high energy physics data. But the agency faces intense global competition to fund the researchers and companies that will lead what could be the next phase of the digital revolution.




Perspective. Everyone will be acquiring health tech.
Google to acquire Fitbit, valuing the smartwatch maker at about $2.1 billion




This is not new. But somehow we forget the obvious and need an occasional reminder.
How Tech CEOs Are Redefining the Top Job
… In 2017, John Chambers, then CEO of Cisco Systems, delivered a disquieting message to participants in Harvard Business School’s executive education program for CEOs. “A decade or two ago, CEOs could be in their offices with spreadsheets, executing on strategy,” he said. “Now, if you’re not out listening to the market and catching market transitions, … if you’re not understanding that you need to constantly reinvent yourself every three to five years, you as a CEO will not survive.”




For my spare time.



Thursday, October 31, 2019


Does this question work for both people and AIs?
Software As a Profession
Choi, Bryan H., Software As a Profession (2019). Harvard Journal of Law & Technology, Vol. 33, 2020, Forthcoming. Available at SSRN: https://ssrn.com/abstract=3467613 – “When software kills, what is the legal responsibility of the software engineer? Discussions of software liability have avoided assessing the duties of “reasonable care” of those who write software for a living. Instead, courts and legal commentators have sought out other workarounds—like immunity, strict liability, or cyber insurance—that avoid the need to unpack the complexities of the software development process. As software harms grow more severe and attract greater scrutiny, prominent voices have called for software developers to be held to heightened duties of “professional care”—like doctors or lawyers. Yet, courts have long rejected those claims, denying software developers the title of “professional.” This discord points to a larger confusion within tort theory regarding the proper role of “professional care” relative to “reasonable care.”
This Article offers a reconceptualized theory of malpractice law that treats the professional designation as a standard of deference, not a standard of heightened duty. This new theoretical framework rests not on the virtues of the practitioner, but on the hazards of the practice. Despite best efforts, doctors will lose patients; lawyers will lose trials. Accordingly, the propriety of the practitioner’s efforts cannot be judged fairly under an ordinary negligence standard, which generates too many occasions to second-guess the practitioner’s performance. Instead, the professional malpractice doctrine substitutes a customary care standard for the reasonable care standard, and thereby allows the profession to rely on its own code of conduct as the measure of legal responsibility…”




A “security committee” makes sense.
Equifax Lawsuit Reveals Embarrassingly Lax Security Protections
In 2017, the Equifax data breach affecting over 147 million people in the United States, Canada, and the UK quickly made history as the first-ever “mega-breach.” Two years later, it still ranks as one of the worst data breach violations in history. Unfortunately, as the details of a new Equifax lawsuit reveal, there is a very strong likelihood that the entire data breach could have been avoided in the first place if the company had adopted even the most basic security protocols.
If nothing else, the Equifax lawsuit – and all of the embarrassing security weaknesses that it is revealing – should be a wakeup call to C-suite executives and board members. If cybersecurity was not yet a board-level priority, it should be now. In the future, a mega-breach of the same scale might do more than just result in huge financial losses and damaging lawsuit claims – it might also end up with those same executives and board members headed to prison.




Using machines to catch errors made by machines is not as effective as using humans.
Mae Anderson reports:
Apple is resuming the use of humans to review Siri commands and dictation with the latest iPhone software update.
In August, Apple suspended the practice and apologized for the way it used people, rather than just machines, to review the audio.
While common in the tech industry, the practice undermined Apple’s attempts to position itself as a trusted steward of privacy.
Read more on APNews. I wonder how many people will read Apple’s notice about having a choice on this. According to the AP, you supposedly have a choice when installing the iOS 13.2 update:
Individuals can choose “Not Now” to decline audio storage and review. Users who enable this can turn it off later in the settings.
So I went and looked at my settings. I haven’t gotten that update yet, so when I do, I will look to see how that choice is presented.



Wednesday, October 30, 2019


A milestone, but probably not the end of this story.
Facebook agrees to pay Cambridge Analytica fine to UK
Facebook has agreed to pay a £500,000 fine imposed by the UK's data protection watchdog for its role in the Cambridge Analytica scandal.
It had originally appealed the penalty, causing the Information Commissioner's Office to pursue its own counter-appeal.
As part of the agreement, Facebook has made no admission of liability.




Perhaps North Korea will cross the line in another country before they push us too far. How would a country like India make war on North Korea?
Nuclear Power Plant in India Hit by North Korean Malware: Report
Reports of a breach at the Kudankulam Nuclear Power Plant located in the Indian state of Tamil Nadu emerged on Monday after a Twitter user posted a VirusTotal link pointing to what appeared to be a sample of a recently discovered piece of malware named Dtrack.
The malware was configured to use a hardcoded username and password combination that referenced KKNPP, the acronym for the Kudankulam Nuclear Power Plant.
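Hardcoded credentials like that are one reason a plain strings sweep is still a useful first pass in malware triage. A minimal sketch of the idea; the keyword list is illustrative:

import re
import sys

# Keywords worth flagging in extracted strings; purely illustrative.
SUSPICIOUS = ("user", "pass", "pwd", "login", "KKNPP")

def printable_strings(data: bytes, min_len: int = 6):
    """Yield runs of printable ASCII, like the Unix 'strings' tool."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

def triage(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    for s in printable_strings(data):
        if any(k.lower() in s.lower() for k in SUSPICIOUS):
            print(f"suspicious string: {s!r}")

if __name__ == "__main__":
    triage(sys.argv[1])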
India-based cybersecurity expert Pukhraj Singh reposted the tweet, revealing that attackers had gained domain controller-level access to the Kudankulam nuke plant and that other “extremely mission-critical targets” had also been hit.
Singh pointed to a tweet that he posted in early September, in which he said he had witnessed a “casus belli,” a Latin expression used to describe an event that is used to justify war. He later clarified that the other targets he had become aware of were even “scarier than KKNPP,” which is why he “went all hyperbolic about casus belli.”
However, some Indian officials have categorically denied that any kind of breach took place at the nuclear power plant. On the other hand, a statement from the Nuclear Power Corporation of India confirmed that the plant was targeted by a cyberattack, but highlighted that control systems are not connected to the local network or the internet and claimed that an attack on the facility’s control systems “is not possible.” Singh also confirmed that there was no evidence of control systems being impacted.




A warning and a sales pitch?
Cyber attack on Asia ports could cost $110 billion: Lloyd's
A cyber attack on Asian ports could cost as much as $110 billion, or half the total global loss from natural catastrophes in 2018, a Lloyd’s of London-backed report said on Wednesday.
Cyber insurance is seen as a growth market by insurance providers such as Lloyd’s, which specializes in covering commercial risks, although take-up in Europe and Asia remains far behind levels in the United States.




It’s 3AM in Australia, do you know where your data is?
C.L.O.U.D.’s On the Horizon: How Law Enforcement Electronic Data Requests Are Going Global
Cybercrime often involves a crime in one country—a hack of a school teacher’s email account in the United Kingdom, for example—but the evidence of the crime often physically resides on servers in another country, such as malware and login records maintained by a social media or online company in California. However, law enforcement agencies investigating multi-country crimes are often bound by the geographic limits of their jurisdictions or must rely on slow diplomatic channels, such as mutual legal assistance treaties (MLATs), to request and obtain the evidence that they need. This slow process necessarily restricted the number of international requests received by U.S. companies.
The 2018 Clarifying Lawful Overseas Use of Data Act (CLOUD Act) authorizes the U.S. to enter into executive agreements with foreign governments to facilitate law enforcement access to cross-border data. The U.S. and the U.K. signed the first CLOUD Act Executive Agreement on October 3, 2019. Now, law enforcement agencies in either country can, according to the U.S. Department of Justice, “demand electronic evidence directly from tech companies based in the other country, without legal barriers.”




The alternative would be to stop politicians from lying. And we all know that’s impossible.
This man is running for governor of California so he can run false Facebook ads
Facebook allows politicians, including candidates for public office, to run ads on its platform that are not fact-checked. That policy has drawn criticism from Democrats who say it will help President Trump's re-election campaign. Former Vice President Joe Biden's campaign wrote to Facebook asking the company to remove a false ad the Trump campaign ran about Biden and Ukraine earlier this month. Facebook denied Biden's request.




I would have thought this would be a condition for a license. Would it not also show demand?
Uber sues Los Angeles to keep scooter location data private
The ride-hailing company doesn't want to share everything with the city's government.
Los Angeles wants a peek at the location data collected by the Uber scooters in its city. The company, better known for its ride-hailing service, doesn't want to give up the information, and is taking legal action to keep the data private.
On Monday, Uber filed a lawsuit against Los Angeles after months of refusing to give the Department of Transportation access to its scooter location data. In September 2018, LADOT instituted a requirement for all scooter companies to provide location data on the vehicles. The city said it was for city planning purposes.




What architecture best supports AI?
Five Traits Of Artificial Intelligence Trailblazers
Artificial intelligence is a must-have in today’s economy. However, for the most part, it’s still not delivering business value in a profound way. Yet, everyone has high hopes.
That’s the word from a survey of 2,555 executives published by MIT Sloan Management Review and Boston Consulting Group, which finds those companies achieving success with AI are those that pay close attention – extremely close attention – to organizational factors.
“A growing number of leaders view AI as not just an opportunity but also a strategic risk,” the study’s co-authors, led by Sam Ransbotham of Boston College, report. “’What if competitors, particularly unencumbered new entrants, figure out AI before we do?’”
Ransbotham and his co-authors identified five common traits that the AI winners exhibit:
  1. AI trailblazers “integrate their AI strategies with their overall business strategy.”
  2. They “take on large, often risky, AI efforts that prioritize revenue growth over cost reduction.”
  3. They “align the production of AI with the consumption of AI, through thoughtful alignment of business owners, process owners, and AI expertise to ensure that they adopt AI solutions effectively and pervasively.”
  4. They “unify their AI initiatives with their larger business transformation efforts.”
  5. They “invest in AI talent, data, and process change in addition to – and often more so than – AI technology. They recognize AI is not all about technology.”




Some interesting points. Very interesting graphics.
Gartner: The Present and Future of Artificial Intelligence
Artificial intelligence uses vast amounts of data and sophisticated probabilistic algorithms to offer "the intimacy of a small town at big-city scale," Gartner VP Svetlana Sicular said at the company's annual IT Symposium last week.
She said, "Something is stalling AI adoption." (In another conversation, she said the biggest issue in AI is the lack of ideas.)
She shared a framework for how Gartner thinks organizations should consider AI projects in the short, medium, and long term. She said companies should plan on scaling volume, quality, and innovation, in that order.
… in the short term, Sicular said people should implement what is easy to adopt and easy to measure.




I’ll ask my students if they have ever seen a floppy disk.
US nuclear forces have quietly kissed their floppy disks goodbye
For more than 50 years, the Defense Department has used 8-inch floppy disks to control the operational functions of the United States' nuclear arsenal — until now.



Tuesday, October 29, 2019


Similar to what Texas did. Interesting that the National Guard already has the skills. Oh wait. They don’t!
Ohio Establishes ‘Cyber Reserve’ to Combat Ransomware
The civilian unit of the National Guard will be on call to assist local governments that come under cyberattack.
Gov. Mike DeWine signed a bill into law Friday that establishes a volunteer “cyber reserve” of computer and information technology experts who will be able to assist local governments in the face of ransomware or other cybersecurity attacks.
The reserve will consist of five teams of 10 people spread throughout the state who will be vetted and trained to respond to cybersecurity emergencies affecting local governments. The response will be similar to the way the Ohio National Guard is placed on active duty during a natural disaster, said Maj. Gen. John C. Harris Jr., the Ohio Adjutant General who oversees the state’s National Guard.
Unlike the National Guard, the volunteer force would be composed of civilians who could not be called up for active military duty. Members must be vetted to join, and the guard is currently accepting applications. Members would only be paid when deployed.




Not quite clarification (to a non-lawyer like me), but lots of detail.
These Cookies are Out of This World
European Court of Justice Planet 49 decision sets the record straight on consent for online cookies and trackers.
On October 1, 2019, the Court of Justice of the EU issued its much-awaited decision in the Planet 49 case. The case dealt with participation in an online lottery and what consent should look like, given the cookies and trackers deployed on the website where the lottery was held.
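The court's core holding: a pre-ticked checkbox is not valid consent; the user must take an affirmative act, and that is true whether or not the cookie data is personal. In web terms, non-essential cookies get set only after an explicit opt-in. A minimal Flask sketch of that pattern; the route and cookie names are my own:

from flask import Flask, request, make_response

app = Flask(__name__)

CONSENT_FORM = """
<form method="post" action="/consent">
  <!-- Per Planet 49, the box must start unchecked; pre-ticked is not consent. -->
  <label><input type="checkbox" name="analytics"> Allow analytics cookies</label>
  <button type="submit">Save</button>
</form>
"""

@app.route("/")
def index():
    return CONSENT_FORM

@app.route("/consent", methods=["POST"])
def consent():
    resp = make_response("Preferences saved.")
    # Set the non-essential cookie only if the user actively opted in.
    if request.form.get("analytics") == "on":
        resp.set_cookie("analytics_allowed", "1", max_age=60 * 60 * 24 * 180)
    return resp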




Well gosh! If Harvard says so…
We Need AI That Is Explainable, Auditable, and Transparent
Just as we concern ourselves with who’s teaching our children, we also need to pay attention to who’s teaching our algorithms. Like humans, artificial intelligence systems learn from the environments they are exposed to and make decisions based on biases they develop. And like our children, we should expect our models to be able to explain their decisions as they develop.
As Cathy O’Neil explains in Weapons of Math Destruction, algorithms often determine what college we attend, if we get hired for a job, if we qualify for a loan to buy a house, and even who goes to prison and for how long. Unlike human decisions, these mathematical models are rarely questioned. They just show up on somebody’s computer screen and fates are determined.




We should see many attempts at an ethical approach.
Mozilla partners with Element AI to spearhead ethical artificial intelligence
As the technology continues to develop and grow, it is important to create -- and hopefully stick to -- applications that maintain some level of ethics (although what level, in turn, is debatable).
Element AI, an AI enterprise software provider that maintains existing partnerships with AWS, Microsoft, Nvidia, and Intel, will work with Mozilla to explore these aspects of ethical AI governance.
The companies will also work on "data trusts," a proposed technological approach to measuring and maintaining control over data, which may become key as AI works its way into data collection solutions.
Data trusts are third-party stewardship models based on "common law trust." These tools, as documented in an Element AI whitepaper, are proposed as a way to give individuals more control over their personal information; to balance power and data rights between companies, governments, and individuals; to enhance privacy, and to give the public the opportunity to "share in the value of data and artificial intelligence."




Who should we trust?
Automatic braking can be life-saving (except when it's not), IIHS study finds
A new study released Tuesday by the Insurance Institute for Highway Safety ranks a majority of midsize cars as "superior" or "advanced" in their pedestrian crash prevention. But three models ranked as "basic," and three got "no credit" at all for their systems.
The IIHS study gauged the performance of these systems during the day. But a separate, recent report by AAA exposed major flaws in automatic emergency braking systems after dark.
"We found that at night the systems were completely ineffective," said Greg Brannon, AAA's director of automotive engineering.




The grocery wars continue.
Amazon axes $14.99 Amazon Fresh fee, making grocery delivery free for Prime members to boost use
Amazon is turning up the heat once again in the world of groceries, and specifically grocery delivery, to make its services more enticing in face of competition from Walmart, as well as a host of delivery companies like Postmates. Today, the company announced that it would make Amazon Fresh — the fresh food delivery service it now offers in some 2,000 cities in the US and elsewhere — free to use for Prime members, removing the $14.99/month fee that it was charging for the service up to now.
Alongside free delivery, Amazon is giving users one- and two-hour delivery options for quicker turnarounds, and it’s making users’ local Whole Foods inventory available online and through the Amazon app.