Friday, November 01, 2019


Not good, if true.
67 per cent of industrial organizations do not report cybersecurity incidents
A recent Kaspersky survey has discovered that two-thirds (67 per cent) of industrial organizations do not report cybersecurity incidents to regulators.
Kaspersky’s State of Industrial Cybersecurity 2019 report shows that many companies flout reporting guidelines – perhaps to avoid regulatory penalties and the public disclosure that can harm their reputation. In fact, respondents said that more than half (52 per cent) of incidents lead to a violation of regulatory requirements, while 63 per cent of them consider loss of customer confidence in the event of a breach a major business concern.




A really useful tip. Grab this ebook!
Resources for Measuring Cybersecurity
Kathryn Waldron at R Street has collected all of the different resources and methodologies for measuring cybersecurity.




Words to inflame legislators?
Has Facebook Become Too Big to Fail?




Avoiding Skynet.
Defense Innovation Board unveils AI ethics principles for the Pentagon
The Defense Innovation Board, a panel of 16 prominent technologists advising the Pentagon, today voted to approve AI ethics principles for the Department of Defense. The report includes 12 recommendations for how the U.S. military can apply ethics in the future to both combat and non-combat AI systems. The recommendations are organized under five main principles: responsible, equitable, traceable, reliable, and governable.
The document titled “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense” and an accompanying white paper will be shared on the Defense Innovation Board website, a DoD spokesperson told VentureBeat.




The assumption that AI must follow human thought patterns is probably an error.
We Shouldn’t be Scared by ‘Superintelligent A.I.’
The idea of artificial intelligence going awry resonates with human fears about technology. But current discussions of superhuman A.I. are plagued by flawed intuitions about the nature of intelligence.
We don’t need to go back all the way to Isaac Asimov — there are plenty of recent examples of this kind of fear. Take a recent Op-Ed essay in The New York Times and a new book, “Human Compatible,” by the computer scientist Stuart Russell.
The assumption seems to be that this A.I. could surpass the generality and flexibility of human intelligence while seamlessly retaining the speed, precision and programmability of a computer. This imagined machine would be far smarter than any human, far better at “general wisdom and social skills,” but at the same time it would preserve unfettered access to all of its mechanical capabilities. And as Dr. Russell’s example shows, it would lack humanlike common sense.
The problem with such forecasts is that they underestimate the complexity of general, human-level intelligence. Human intelligence is a strongly integrated system, one whose many attributes — including emotions, desires, and a strong sense of selfhood and autonomy — can’t easily be separated.




Conservative.
5 ways AI will evolve from algorithm to co-worker
Now that Siri and Alexa have moved from guest to family member at home, the next frontier for artificial intelligence-powered virtual assistants is the office.
KPMG's Traci Gusher thinks that these assistants will soon move out of the basic "What's the weather going to be?" phase to take on more work-specific tasks. In the next stage of artificial intelligence (AI) development, humans will be able to use virtual assistants as notetakers. These assistants will need coaching along the way just like any junior employee. Gusher predicts the technology will reach the ideal state of "virtual keepers of wisdom" by 2030. At that point, the virtual assistants will be able to track the news, figure out the relevance to a company's business, and then analyze existing contracts to spot any necessary changes or new advantages.




What other departments have vast stores of data?
DOE readies multibillion-dollar AI push
The U.S. Department of Energy (DOE) is planning a major initiative to use artificial intelligence to speed up scientific discoveries. At a meeting last week, DOE officials said they will likely ask Congress for between $3 billion and $4 billion over 10 years, roughly the amount the agency is spending to build next-generation "exascale" supercomputers. And DOE has a unique asset: torrents of data. The agency funds atom smashers, large-scale surveys of the universe, and the sequencing of thousands of genomes. Algorithms trained on these data could help discover new materials or pick out rare signals of new particles in the deluge of high-energy physics data. But DOE faces intense global competition to fund the researchers and companies that could lead the next phase of the digital revolution.




Perspective. Everyone will be acquiring health tech.
Google to acquire Fitbit, valuing the smartwatch maker at about $2.1 billion




This is not new. But somehow we forget the obvious and need an occasional reminder.
How Tech CEOs Are Redefining the Top Job
… In 2017, John Chambers, then CEO of Cisco Systems, delivered a disquieting message to participants in Harvard Business School’s executive education program for CEOs. “A decade or two ago, CEOs could be in their offices with spreadsheets, executing on strategy,” he said. “Now, if you’re not out listening to the market and catching market transitions, … if you’re not understanding that you need to constantly reinvent yourself every three to five years, you as a CEO will not survive.”




For my spare time.


