Saturday, April 15, 2017

An interesting approach to security.
Can AI and ML slay the healthcare ransomware dragon?
It’s common knowledge that healthcare organizations are prime – and relatively easy – targets for ransomware attacks.  So it is no surprise that those attacks have become rampant in the past several years.  The term “low-hanging fruit” is frequently invoked.
But according to at least one report, and some experts, it doesn’t have to be that way. ICIT – the Institute for Critical Infrastructure Technology – contends in a recent whitepaper that the power of artificial intelligence and machine learning (AI/ML) can “crush the health sector’s ransomware pandemic.”
   The AI/ML model, Bathurst said, doesn’t need specific signatures.  “It’s very good at answering questions like: ‘Is this file going to potentially harm my computer if it’s allowed to execute?’  It doesn’t need one-to-one matches with signatures,” he said.
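The signature-less idea Bathurst describes can be sketched as a classifier that scores features of a file rather than matching known hashes. The features, weights, and threshold below are invented purely for illustration; no real vendor's model works from a handful of hand-set rules like this:

```python
import math

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def risk_score(data: bytes) -> float:
    """Toy risk score: high entropy (packed/encrypted payloads) and
    suspicious byte strings raise the score.  Weights are illustrative."""
    score = 0.1 * byte_entropy(data)            # packed code trends toward 8.0
    if b"CreateRemoteThread" in data:           # a common injection API name
        score += 0.5
    if b"vssadmin delete shadows" in data:      # ransomware often wipes backups
        score += 1.0
    return score

def looks_malicious(data: bytes, threshold: float = 1.0) -> bool:
    """Answer Bathurst's question without any one-to-one signature match."""
    return risk_score(data) >= threshold

# A plain text file scores low; a blob carrying a backup-wiping command scores high.
benign = b"hello world, just a plain document"
shady = b"\x90" * 64 + b"vssadmin delete shadows /all /quiet"
```

A real AI/ML product would learn its weights from millions of labeled samples instead of hard-coding them, but the shape of the decision is the same: a score over file properties, not a lookup of a known signature.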
It is obvious that the healthcare sector needs better security.  One of the reasons it is such a popular target is that, as the report notes, the victims are more likely to pay, since, “every second a critical system remains inaccessible risks the lives of patients and the reputation of the institution.  Hospitals whose patients suffer as a result of deficiencies in their cyber-hygiene are subject to immense fines and lawsuits.”
   AI/ML can be three times the cost of anti-virus solutions, he said, “and healthcare organizations are already fighting for every budget dollar they have.
“If the average cost of a ransomware attack is $300 – which was reported by the ICIT in 2016 – why would I spend tens of thousands of dollars more per year to prevent that risk?  I’d need 30 or 40 successful attacks before the cost makes sense.”
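The break-even arithmetic in that quote is easy to make explicit. The $10,000 annual premium below is a stand-in for "tens of thousands of dollars more per year"; the $300 figure is the ICIT number quoted above:

```python
def breakeven_attacks(annual_premium: float, cost_per_attack: float) -> float:
    """How many attacks per year the pricier tool must prevent to pay for itself."""
    return annual_premium / cost_per_attack

# With a $10,000/year premium over anti-virus and a $300 average cost
# per attack, roughly 33 prevented attacks are needed to break even.
attacks = breakeven_attacks(10_000, 300)
```

Of course, the arithmetic only favors anti-virus if $300 really captures the full cost of an incident; the downtime, fines, and lawsuits the report mentions would shift the break-even point sharply.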

(Related)
   In fact, although we saw examples of companies using AI in computer-to-computer transactions such as in recommendation engines that suggest what a customer should buy next or when conducting online securities trading and media buying, we saw that IT was one of the largest adopters of AI.  And it wasn’t just to detect a hacker’s moves in the data center.  IT was using AI to resolve employees’ tech support problems, automate the work of putting new systems or enhancements into production, and make sure employees used technology from approved vendors.  Between 34% and 44% of global companies surveyed are using AI in their IT departments in these four ways, monitoring huge volumes of machine-to-machine activities.
In stark contrast, very few of the companies we surveyed were using AI to eliminate jobs altogether.  For example, only 2% are using artificial intelligence to monitor internal legal compliance, and only 3% to detect procurement fraud (e.g., bribes and kickbacks).
What about the automation of the production line?  Whether assembling automobiles or insurance policies, only 7% of manufacturing and service companies are using AI to automate production activities.  Similarly, only 8% are using AI to allocate budgets across the company. Just 6% are using AI in pricing.


Making crime safe?  Social media for the anti-social? 
Move over darknet, WhatsApp is where India’s new digital black market is at
   groups offering items, mostly electronics, bought from ecommerce sites using stolen credit card details at heavily discounted prices.
Until recently, contraband, porn, fake IDs, credit card details and other hacked user data were sold only on the deep web or the darknet, which not everyone knows about and which is not easy to access.
By moving to the chat app, this illegal trade is going mainstream, giving cybercriminals access to India’s 200 million WhatsApp users.
What’s more, traders can be brazen in their dealings as they have no fear of being caught.  Data privacy laws and WhatsApp’s encryption policy make it next to impossible for cybercrime authorities to track such black markets.  The fact that most users on these groups sign up with virtual numbers — use-and-throw proxy numbers that can be generated using apps — makes it even more difficult.  India’s national encryption policy draft excludes WhatsApp users from the mandate of keeping a 90-day record of all their encrypted communications.


Will we even recognize when an AI makes a bad decision? 
The Dark Secret at the Heart of AI
No one really knows how the most advanced algorithms do what they do.  That could be a problem.
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.  The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence.  The car didn’t follow a single instruction provided by an engineer or programmer.  Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat.  But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.  Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems.  The result seems to match the responses you’d expect from a human driver.  But what if one day it did something unexpected—crashed into a tree, or sat at a green light?  As things stand now, it might be difficult to find out why.  The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action.  And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did. [Not sure I agree with this.  Bob] 
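The pipeline described there, raw sensor values flowing through layers of artificial neurons to produce control outputs, can be sketched in miniature. The weights below are random stand-ins, not anything Nvidia actually learned; the point is only that the mapping from input to steering command lives entirely in numeric weights, and no single parameter can be read as a rationale:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with tanh activations."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 3-input -> 4-hidden -> 1-output network with random weights,
# standing in for the millions of learned parameters in a real model.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [random.uniform(-1, 1) for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)]]
b2 = [random.uniform(-1, 1)]

def steering_command(sensors):
    """Map sensor readings (say, camera-derived lane offsets) to a
    steering value in [-1, 1].  The 'reason' for the output is spread
    across every weight; inspecting any one of them explains nothing."""
    return layer(layer(sensors, w1, b1), w2, b2)[0]

command = steering_command([0.2, -0.5, 0.9])
```

Scale this toy up by six or seven orders of magnitude and you have the interpretability problem the article is describing: the behavior is all there in the numbers, but the numbers are not an explanation.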
   In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records.  This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on.  The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease.  Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.  There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”
At the same time, Deep Patient is a bit puzzling.  It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well.  But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible.  He still doesn’t know.  The new tool offers no clue as to how it does this.  If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed.  “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”


For my cable-cutting students.
   For those unfamiliar with Kodi, you can run it on your desktop, install the Android version from the Play Store, or even follow a workaround to get it running on your iOS gadget.
As for Kodi boxes, they are becoming increasingly common as people look to slash their cable bill or cut the cord completely.
   Formerly known as XBMC, Kodi is a free-to-use open source media player.  It acts as a single centralized hub for all your locally-saved entertainment.  It also lets you watch live TV thanks to its support for most well-known back-ends, including MediaPortal, MythTV, NextPVR, Tvheadend, and VDR.
   Is Kodi Illegal?
The answer is a resounding No.  Kodi itself is not illegal, and is very unlikely ever to become so.
In simple terms, Kodi is nothing more than a media app.  When you install it on your device, it’s empty.  It’s nothing more than a shell waiting for you, the user, to populate it with content.  No add-ons come pre-packaged, and even if they did, there is no way the developers would release the app with the illegal ones baked in.
Kodi even has an official repository for add-ons.  Every single one of the add-ons you will find in it is entirely legal to download and use in every jurisdiction.


Many of my students use WhatsApp.
Using WhatsApp as a Private Store for your Documents and Notes
WhatsApp is more than just a messaging app.  Use the app to quickly transfer files between your computer and phone.  Or make it a private storehouse for your notes, voice memos, documents and more.
