Saturday, July 20, 2019

“We can hack it for you wholesale!” (Interesting anti-copying tech on this site. Try to copy the headline.)
Attack Steals Bank Passwords by Hijacking 180,000 Internet Routers in Brazil
Antivirus maker Avast has published an alert about two attacks tampering with Internet router settings in Brazil. The change, made to at least 180,000 devices in the first half of 2019 alone, diverts access to certain sites to cloned pages, which then forward any passwords entered to hackers.
The redirection changes the destination of banking services and advertising material, and also sends cryptocurrency-mining code to the victim’s browser.
The targets of the attack are domestic routers, such as those provided by operators and internet providers or acquired privately in the market to access the internet (see list of models below).

A much smaller hacking target.
Ed Dept: Hackers breached 62 colleges, created thousands of fake student profiles
The security flaw was found in previous versions of Banner software that colleges use to design web applications and authenticate users.
Hackers used the security flaw to take over users' sessions when they tried to log in and may have been able to access sensitive student data, according to the National Institute of Standards and Technology. The Ed Department noted on its website that the breach may also have given hackers access to the agency's student financial aid data; the department did not return a request for further comment.
It's not clear how many institutions are still using the older versions of the software, but more than 1,400 colleges use Banner for a variety of services, including for managing student information, employee benefits and financial aid.
An Ellucian spokesperson didn't say how or when the vulnerability was discovered. However, a GitHub post suggests a University of South Carolina student worker may have found and reported the issue to the company in December.

How much would adequate Computer Security have cost? How much will Directors pay?
Equifax reportedly close to $700 million data breach settlement
Remember that time Equifax had a data breach and leaked an incredible amount of information – addresses, Social Security numbers and even driver's licenses – on more than 143 million people in the US alone? That was revealed nearly two years ago, and tonight media reports suggest the company is closing in on a settlement with federal and state agencies including the FTC, Consumer Financial Protection Bureau and state attorneys general. The New York Times and Wall Street Journal reported it could pay between $650 million and $700 million, near the $690 million figure Equifax told investors it had set aside for a penalty.
The Equifax breach came after hackers exploited a known flaw in unpatched software that its former CEO pinned on one employee instead of flawed policies. The data broker already agreed to new rules on security policies in some earlier settlements, and it remains to be seen if or how this will add additional oversight.

Are we finally getting serious about policing the Internet? (Probably not)
FTC approves settlement with Google over YouTube kids privacy violations
The Federal Trade Commission has finalized a settlement with Google in its investigation into YouTube for violating federal data privacy laws for children, said two people familiar with the matter who were not authorized to discuss it on record.
The settlement — backed by the agency’s three Republicans and opposed by its two Democrats — finds that Google inadequately protected kids who used its video-streaming service and improperly collected their data in breach of the Children’s Online Privacy Protection Act, or COPPA, which prohibits the tracking and targeting of users younger than 13, the people said.

I can see where lawyers might disagree.
Ill-Suited: Private Rights of Action and Privacy Claims
The U.S. Chamber of Commerce Institute for Legal Reform has published “Ill-Suited: Private Rights of Action and Privacy Claims,” a white paper authored by Hogan Lovells’ Mark W. Brennan, Alicia Paller, Melissa Bianchi, Adam Cooke, and Joseph Cavanaugh explaining why private litigation is a poor enforcement tool for privacy laws. As detailed in the paper, when it comes to privacy interests, “harms” are largely inchoate and intangible, and the wrongdoers are often unknown or unidentifiable. Even where class members may have suffered a concrete injury, the data indicates that they are unlikely to receive material compensatory or injunctive relief through private litigation. Meanwhile, plaintiffs’ counsel often walks away with millions of dollars, court dockets are unduly cluttered, and companies are forced to expend resources on baseless litigation.

This may relate to a couple of articles later in the blog…
Andis Robeznieks reports:
The Food and Drug Administration (FDA) has basic rules for regulating wearable devices and other digital health tools, but those rules may change as rapid innovation continues and the agency creates new pathways to ensure the safety and efficacy of new consumer-facing products. AMA experts outlined this and other need-to-know facts for physicians counseling patients who are increasingly looking to the wearable as a health tool.
Attorney Shannon Curtis, AMA assistant director for federal affairs, said during a recent education session that there are three important things for physicians to keep in mind when counseling patients about wearables or mobile health (mHealth) apps.
Be aware of an app or device’s regulatory status before recommending it to patients. […]
Alert patients to data privacy issues. […]
Help patients understand the information they receive. […]

I am delighted that they are advising physicians to alert patients to privacy issues.
Read more on the American Medical Association.

Not the best headline (no detailed timeline), but an interesting article.
The Twenty Year History Of AI At Amazon
If you’ve ever browsed through the vast selection of items Amazon offers on its website, then you’ve most likely interacted with its advanced AI algorithms. Beginning with product recommendations, Amazon started using machine learning algorithms as part of its core offerings, and over time it has quietly built strong AI and ML capabilities broadly across the whole organization. There is no single AI group at Amazon. Rather, every team is responsible for finding ways to use AI and ML in its work. At the company’s recent re:MARS conference in June 2019, Amazon showcased how widely it uses AI and ML. At the event, the AI Today podcast interviewed three executives across various Amazon groups to hear how each is using AI.

(Related) Interesting. Imagine lawyers creating their own evidence: an alternative version of events.
DeepMind’s AI learns to generate realistic videos by watching YouTube clips
Perhaps you’ve heard of FaceApp, the mobile app that taps AI to transform selfies, or This Person Does Not Exist, which surfaces computer-generated photos of fictional people. But what about an algorithm whose videos are wholly novel? One of the newest papers from Google parent company Alphabet’s DeepMind (“Efficient Video Generation on Complex Datasets”) details recent advances in the budding field of AI clip generation. Thanks to “computationally efficient” components and techniques and a new custom-tailored data set, researchers say their best-performing model — Dual Video Discriminator GAN (DVD-GAN) — can generate coherent 256 x 256-pixel videos of “notable fidelity” up to 48 frames in length.

Yeah? How?
AI Weekly: A growing chorus of experts agrees facial recognition systems must be regulated
On Tuesday, Oakland became the third U.S. city after San Francisco and the Boston suburb of Somerville to ban facial recognition use by local government departments, including its police force. The ordinance adopted by the city council, which was written by Oakland’s Privacy Advisory Commission and sponsored by Councilmember Rebecca Kaplan, prohibits the city and its staff from obtaining, retaining, requesting, accessing, or using facial recognition technology or any information gleaned from it.
A September 2018 report revealed that IBM worked with the New York City Police Department to develop a system that allowed officials to search for people by skin color, hair color, gender, age, and various facial features. Elsewhere, the FBI and U.S. Immigration and Customs Enforcement are reportedly using facial recognition software to sift through millions of driver’s license photos, often without a court order or search warrant. And this past summer, Amazon seeded Rekognition, a cloud-based image analysis technology, to law enforcement in Orlando, Florida, and to the Washington County, Oregon, Sheriff’s Office. The City of Orlando said this week it discontinued its Rekognition pilot, citing a lack of necessary equipment or bandwidth. But Washington County used Rekognition to build an app that lets deputies run scanned photos of suspected criminals through a database of 300,000 faces, which the Washington Post claims has “supercharged” police efforts in the state.

Your home gym as a Thing on the Internet of Things.
The future of fitness is together but alone
One of the reasons the Peloton model has been so popular is society’s growing interest in self-care and wellness, with people looking to technology in the hopes of easily finding it. Self-improvement was the number one app theme last year, while the hashtag #selfcare soared from 5 million to 17 million posts on Instagram between August 2018 and July 2019. Now that people are used to finding self-care at the tap of a touchscreen, the convenience of connected fitness machines has also made them more attractive over the past few years, says Stephen Intille, an associate professor at Northeastern University specializing in health technology.

PLEASE tell me this is fake news! Babies are now a Thing on the Internet of Things?
Pampers introduces internet-connected diapers
Pampers is the latest company to jump into trendy, wearable devices with a new "connected care system" called Lumi that tracks babies' activity through a sensor that attaches to diapers.
The sensor sends an app notification when a diaper is wet. It also sends information on the baby's sleep and wake times and allows parents to manually track additional info, like dirty diapers and feeding times. A video monitor is included with the system and is integrated into the app. Pampers didn't say how much the system, which is launching in the U.S. this fall, will cost.
… The Lumi system encrypts all data and uses "the same standard of security as the financial services industry," [Will the FBI demand access? Bob] said Pampers spokeswoman Mandy Treeby. The system does not currently include two-factor authentication, something security experts consider key to avoiding unauthorized access to systems.
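The two-factor authentication the experts are calling for is usually a time-based one-time password (TOTP). As a sketch of how little machinery it requires, here is a minimal RFC 6238 implementation using only the Python standard library (the key is the RFC's published test key, not anything from Pampers):

```python
import hashlib
import hmac
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    return hotp(key, timestamp // step, digits)

# RFC 6238 Appendix B test vector: this key at t = 59 s yields 94287082.
print(totp(b"12345678901234567890", 59, digits=8))
```

Because the device and server share the key, a fresh code proves possession of the enrolled device without the key itself ever crossing the network.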
… The risk with so many ordinary objects becoming “smart” is that it makes them dependent on software updates and vulnerable to malfunctions, or to losing connectivity if a company goes out of business or discontinues the line. Nike’s $350 self-lacing shoes, for instance, stopped lacing earlier this year because of a software update.

Friday, July 19, 2019

“Hey! Here’s my new bank account. Send all my payments here.”
BEC Scams Average $301 Million Per Month In Illegal Transfers
The frequency of business email compromise (BEC) scams has increased year over year and so did the value of attempted thefts, reaching a monthly average of more than $300 million.
The latest Internet Crime Report from the FBI's Internet Crime Complaint Center (IC3) says BEC scams were responsible for most of the losses generated by cybercrime.
Companies lost $1.2 billion to this sort of cybercriminal activity, which aims to obtain funds by posing as a customer or upper management in order to trick key individuals in the organization into wiring funds to an attacker-controlled bank account.
Crooks use different tactics to attain their goal. In 2017 they typically impersonated company CEOs, who have sufficient authority to instruct the individuals in charge of payments to wire money to a specific account.
This approach dropped from 33% to 12% in 2018, indicating that fraudsters are adapting and looking for new ways to play their tricks.
Last year they seemed to prefer impersonating customers and vendors, and used fake invoices in an attempt to get paid.

No surprise. Telling Russian lies from politician lies ain’t easy. Knowing who buys an ad should be.
Google’s Tool to Tame Election Influence Has Flaws
Google set up a searchable database of political ads last summer, following calls for greater transparency in the wake of Russia’s interference in the 2016 presidential election.
Nearly a year later, the search giant’s archive of political ads is fraught with errors and delays, according to campaigns’ digital staffers and political consultants. The database, the Google Transparency Report, doesn’t always record political ads bought with Google’s ad tools and in some instances hasn’t updated for weeks at a time, they say.
Several campaigns, including those of Democratic presidential hopefuls Bernie Sanders and Elizabeth Warren, have run ads in recent weeks that didn’t appear in the Google archive, people familiar with the campaigns’ ad-buying said. Such mistakes have occurred for presidential and congressional candidates in both parties.

Good summary, again no real suggestions for change.
How Cyber Weapons Are Changing the Landscape of Modern Warfare
In the weeks before two Japanese and Norwegian oil tankers were attacked, on June 13th, in the Gulf of Oman—acts which the United States attributes to Iran—American military strategists were planning a cyberattack on critical parts of that country’s digital infrastructure. According to an officer involved, who asked to remain anonymous, as Iran ramped up its attacks on ships carrying oil through the Persian Gulf—four tankers had been mined in May—and the rhetoric of the national-security adviser, John Bolton, became increasingly bellicose, there was a request from the Joint Chiefs of Staff to “spin up cyber teams.” On June 20th, hours after a Global Hawk surveillance drone, costing more than a hundred million dollars, was destroyed over the Strait of Hormuz by an Iranian surface-to-air missile, the United States launched a cyberattack aimed at disabling Iran’s maritime operations. Then, in a notable departure from previous Administrations’ policies, U.S. government officials, through leaks that appear to have been strategic, alerted the world, in broad terms, to what the Americans had done.
… At Cyber Command, teams are assigned to specific adversaries—Iran, North Korea, Russia, and China, among them—and spend years working alongside the intelligence community to gain access to digital networks.

Would you sell out so cheaply?
What Amazon Thinks You’re Worth
Shoppers were offered a $10 credit in exchange for handing over their browser data. It’s an investment that pays dividends for Amazon.
… Amazon’s Prime Day bonanza came with an interesting deal: If users downloaded the Amazon Assistant app to their browser, they would receive a $10 credit.
The Amazon Assistant is a browser extension, shopping assistant, and recommendation tool, all rolled into one. Hover over an item while you’re shopping on another site, and the assistant will compare the item you’re looking at with a similar one available on Amazon. Of course, when Amazon has the cheaper deal, users will likely choose that one instead. But the assistant also allows Amazon access to users’ browser data: the URLs of the pages they visit, the search terms that brought them there, search results and metadata about those pages. Amazon offered the exchange last year as well, for a $5 credit.

Ah! Someone thinks there will be…
Life after artificial intelligence
Artificial intelligence stands to be the most radically transformative technology ever developed by the human race. As a former artificial intelligence entrepreneur turned investor, I spend a lot of time thinking about the future of this technology: where it’s taking us and how our lives are going to reform around it. We humans tend to develop emergent technologies to the nth degree, so I think there is a certain inevitability to the far-out techno-utopian visions from certain branches of science fiction — it just makes common sense to me and many others. Why shouldn’t AI change everything?
… At the risk of speaking in generalities, here’s how I forecast our weird, unknown future where AI is simultaneously very advanced and very mainstream. Things are going to be completely different from what we know today, but these changes are distinctly positive, not negative.

The Israeli firm behind software used to hack WhatsApp boasted that it can scrape data from Amazon, Apple, Facebook, Google, and Microsoft cloud servers
The company behind a WhatsApp hack has been boasting that it can break into the cloud services of big tech companies, including Amazon, Apple, Facebook, Google, and Microsoft, the Financial Times reports.
The Israeli security firm NSO Group is infamous for its malware, Pegasus, which the FT said in May had been used to hack the phones of human rights activists with just a single WhatsApp call. The malware could make its way onto the target's phone even if they didn't pick up.
Now NSO has been telling potential clients Pegasus has been developed to target cloud servers, according to people familiar with the sales pitch and documents shared with the FT. NSO reportedly said in its pitch that, by hacking into these servers, it could access someone's entire location data history, archived messages, and photos.
According to the sales documents viewed by the FT, the method involves copying authentication keys for services like Google Drive, Facebook Messenger and iCloud, from a targeted phone. Once this is done, a separate server can then impersonate the device without alerting the real owner.
The document said that even if the malware is removed from the device, attackers could still have unlimited access to data uploaded to the cloud, the FT reported.
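The weakness being exploited here is generic to bearer-token authentication: a server that sees a valid token has no way to tell whether the enrolled device or a copy is presenting it. A toy sketch of the idea (the service, token format, and names are invented for illustration; this is not NSO's actual method):

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # hypothetical cloud service's signing key

def issue_token(device_id: str) -> str:
    # The service signs the device ID once, at login/enrollment.
    sig = hmac.new(SERVER_SECRET, device_id.encode(), hashlib.sha256).hexdigest()
    return f"{device_id}.{sig}"

def serve_request(token: str) -> str:
    # The server validates only the token itself; it cannot tell which
    # physical machine is presenting it.
    device_id, sig = token.rsplit(".", 1)
    expected = hmac.new(SERVER_SECRET, device_id.encode(), hashlib.sha256).hexdigest()
    return f"data for {device_id}" if hmac.compare_digest(sig, expected) else "denied"

token = issue_token("victims-phone")  # stored on the real device
print(serve_request(token))           # real device: served
print(serve_request(token))           # attacker replaying the copied token: also served
```

This is why a copied authentication key keeps working even after the malware is removed: nothing on the server side distinguishes the impostor from the device.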

Cool or criminal?
Every time you sign up for a free trial of any kind, you’re forced to take stock of your outlook on life. Realists accept that they’ll eventually wind up paying for this thing that is currently free. Pessimists understand this too, but are prematurely embittered even as they plug in their credit card numbers. Optimists assure themselves that they’ll keep track of when the trial ends and they’ll cancel before they are ever charged, if it turns out they don’t want to continue.
As of today, there is a more convenient way for you to cancel before ever being charged: a service called Free Trial Card. It's available now through the app DoNotPay, created by 22-year-old wunderkind coder and entrepreneur Joshua Browder.
The Free Trial Card is a virtual credit card you can use to sign up for free trials of any service anonymously, instead of using your real credit card. When the free trial period ends, the card automatically declines to be charged, thus ending your free trial. You don’t have to remember to cancel anything. If you want, the app will also send an actual legal notice of cancelation to the service.
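The mechanics are easy to model: a virtual card number that approves the signup-time verification but declines anything once the trial window closes. A toy sketch (dates and policy invented for illustration; DoNotPay's real rules are surely more involved):

```python
from datetime import date, timedelta

class FreeTrialCard:
    """Toy virtual card: approve charges only inside the trial window."""

    def __init__(self, opened: date, trial_days: int):
        self.closes = opened + timedelta(days=trial_days)

    def authorize(self, charge_date: date, amount_cents: int) -> bool:
        # Zero-dollar verification holds always pass; real charges
        # succeed only while the trial is still running.
        return amount_cents == 0 or charge_date <= self.closes

card = FreeTrialCard(date(2019, 7, 1), trial_days=30)
print(card.authorize(date(2019, 7, 1), 0))      # signup verification: approved
print(card.authorize(date(2019, 8, 15), 1299))  # renewal after trial: declined
```

The user never has to remember the cancellation date because the card, not the subscriber, enforces it.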

An interesting homework challenge: What would you say to interest the President enough to get this response? Probably not an argument based on technology. (Did Microsoft really complain about Microsoft?)
Trump says he’s looking into a Pentagon cloud contract for Amazon or Microsoft because ‘we’re getting tremendous complaints’
… “We’re getting tremendous complaints from other companies,” Trump said in a press pool at the White House during a meeting with the prime minister of The Netherlands. “Some of the greatest companies in the world are complaining about it.” He named Microsoft, Oracle and IBM.
Since April, Microsoft and Amazon have been the only remaining competitors for the contract after IBM and Oracle were ruled out by the Defense Department. The contract, known as JEDI, is viewed as a marquee deal for the company that ultimately wins it, particularly as Microsoft and Amazon are aggressively pursuing government work for their expanding cloud units.

Something for all my students. (Because they don’t teach this in high school?)
Common Craft Explains How to Craft Clear Email Communication

Thursday, July 18, 2019

I wonder what percentage of targets are not covered by any security services?
Microsoft Reports Hundreds of Election-Related Cyber Probes
Microsoft says it has detected more than 740 infiltration attempts by nation-state actors in the past year targeting U.S.-based political parties, campaigns and other democracy-focused organizations including think tanks and other nonprofits.
A company spokeswoman would not name or further characterize the targets. All subscribe to Microsoft’s year-old AccountGuard service. It provides free cyberthreat detection to candidates, campaigns and other mostly election-related groups.
Microsoft did not say how many infiltration attempts were successful but noted in a blog post Wednesday that such targeting similarly occurred in the early stages of the 2016 and 2018 elections.

Is there a problem beyond, “My God! They’re Russians!” (Or are they concerned that Bernie Sanders doesn’t look so good in 20 years?)
DNC warns 2020 campaigns not to use FaceApp 'developed by Russians'
"It's not clear at this point what the privacy risks are, but what is clear is that the benefits of avoiding the app outweigh the risks," Lord continued.

Probably even worse (better?) next year.
Lucas Ropek reports:
Though it was hailed as a potentially groundbreaking bill, the New York Privacy Act (NYPA) failed to materialize during the state’s most recent session. Had it done so, the bill would have introduced a regulatory framework that rivaled or potentially even surpassed that of the California Consumer Privacy Act (CCPA), the first major piece of data privacy legislation in the U.S.
Sen. Kevin Thomas introduced the bill earlier this year, quickly garnering a number of co-sponsors in the Senate, but failing to find any in the Assembly. The legislation received considerable media attention — with outlets calling it potentially “tougher,” “bolder” and more “sweeping” than legislation that had come before.
Read more on GovTech.

Put the blame where it belongs. (CEO does not mean Chief Ethical Officer… Should it?)
Want Responsible AI? Think Business Outcomes
The rising concern about how AI systems can embody ethical judgments and moral values is prompting the right questions. Too often, however, the answer seems to be to blame the technology or the technologists.
Delegating responsibility is not the answer.
Creating ethical and effective AI applications requires engagement from the entire C-suite. Getting it right is both a critical business question and a values statement that requires CEO leadership.

Interesting, but short on solutions.
How AI companies can avoid ethics washing
One of the essential phrases necessary to understand AI in 2019 has to be “ethics washing.” Put simply, ethics washing — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. A textbook example for tech giants is when a company promotes “AI for good” initiatives with one hand while selling surveillance capitalism tech to governments and corporate customers with the other. [Perhaps they don’t define “good” as I do. Bob]

Call for papers.
The National Security Commission on Artificial Intelligence, which we co-chair, is an independent federal commission helping the United States government determine what actions to take to ensure America’s national security enterprise has the tools it needs to maintain U.S. global leadership. The commission includes four working groups and three special projects. The working groups focus on maintaining U.S. global leadership in AI research, sustaining global leadership in national security AI applications, preparing the national security workforce for an AI future, and ensuring international cooperation and competitiveness in AI. The three special projects address ethics, data, and public-private partnerships. We will produce two reports to Congress, both intended to elevate awareness and to inform better legislation.
The commission speaks with diverse groups, but we want to have as wide an aperture as possible. We need to hear original, creative ideas that challenge the status quo, shake our assumptions, and will cause us to reconsider the arguments we’ve already heard and hear new arguments in a different light. As with previous War on the Rocks calls for articles, we want detailed, realistic papers from qualified voices, but welcome radical ideas and recommendations.

Perspective. Halfway to Christmas?
Amazon declares Prime Day its biggest shopping event in history, surpassing the combined sales of Cyber Monday and Black Friday
Amazon said more than 175 million items were sold over the 48-hour event, which started Monday. Last year, the event lasted 36 hours, during which Amazon sold about 100 million items.

Perspective. Internet on the couch?
Psychology of the Internet
People under the age of twenty don’t know a world without the internet. On The Point, our panel of mental health experts talk about "cyberpsychology": the study of the human mind and behavior, and the impact of the culture of technology, like virtual reality and social media.

Wednesday, July 17, 2019

Perfect for my Security Compliance class.
The Essential Guide to Legislation
PoliticoPro – “During a single Congress, hundreds of bills are enacted into federal law – but the initial legislation proposed by lawmakers in the House and Senate can number well over 10,000 bills per session of Congress. With so much proposed legislation flowing through the standard processes, tracking can quickly become difficult. This guide breaks down each step of the legislation proposal process in the House and Senate, the steps that can result in changes to legislation before it becomes law, as well as how the two houses resolve legislative differences. A key difference in the legislative process between the two chambers is that majority leadership wields more legislative power in the House than in the Senate, where individual senators have more control throughout the process, especially on the floor.”

Give us a few years and we’ll figure this out.
GDPR Compliance Since May 2018: A Continuing Challenge
Companies must automate and streamline, or the challenge of GDPR compliance will overwhelm them.
McKinsey research shows that few companies feel fully compliant: as many as half, feeling at least somewhat unprepared for GDPR, are using temporary controls and manual processes to ensure compliance until they can implement more permanent solutions. Broader organizational challenges persist as well – particularly honoring and protecting the rights of data subjects and ensuring that impact assessments, reporting of breaches, and audit organizations are functioning properly. With numerous stopgaps still in place, companies struggle to implement sustainable, long-term solutions.

Can we trust the antitrusters?
EU opens Amazon antitrust investigation
The EU’s Competition Commission has opened a formal antitrust investigation into Amazon to investigate whether the company is using sales data to gain an unfair advantage over smaller sellers on the Marketplace platform. The Commission says it will look into Amazon’s agreements with marketplace sellers, as well as how Amazon uses data to choose which retailer to link to using the “Buy Box” on its site. The announcement comes on the same day that Amazon announced changes to its third-party seller service agreement in response to a separate antitrust investigation by German regulators.

(Related) No doubt they will get to the bottom of that nagging question: How can you make money if Facebook is free?
Facebook Denies App Changes to Avoid Breakup: Antitrust Update
U.S. technology giants are headed for their biggest antitrust showdown with Congress in 20 years as lawmakers and regulators demand to know whether companies like Alphabet Inc.’s Google and Facebook Inc. use their dominance to squelch innovation. The House Judiciary antitrust subcommittee is holding a hearing Tuesday on the market power of the largest tech companies. Executives from Apple Inc., Amazon.com, Inc., Google and Facebook are testifying. Here’s the latest from the committee room:

Perspective. It’s what companies are doing outside of Africa that caught my eye.
What do automation and artificial intelligence mean for Africa?
… the latest round of technologies seems to be dealing Africa’s economic prospects a serious blow. Adidas, the German sporting goods company, has established “Speedfactories” in Ansbach, Germany, and Atlanta in the U.S. that use computerized knitting, robotic cutting, and 3D printing to produce athletic footwear. Foxconn, the Taiwanese firm known for producing Apple and Samsung products in China’s Jiangsu province, recently replaced 60,000 factory workers with industrial robots. By reducing the importance of wage competitiveness, robots in “smart factories” can completely change what it takes for a place to be competitive in the global market for manufactures. If high-income economies are reshoring production, this could slow down and even reverse the migration of newcomers from Africa into global value chains.

Perspective. Since everyone now carries a portable device…
Education publisher Pearson to phase out print textbooks
The world's largest education publisher has taken the first step towards phasing out print books by making all its learning resources "digital first".
Pearson said students would only be able to rent physical textbooks from now on, and they would be updated much less frequently.
The British firm hopes the move will make more students buy its e-textbooks, which are updated continually.
"We are now over the digital tipping point," boss John Fallon told the BBC.

A simple tool for creating “fake news.” Also a simple introduction to webpage coding?
See What's Behind Any Webpage With Mozilla's X-Ray Goggles
One of the topics that we talked about during the Practical Ed Tech Summer Camp was digital literacy and critical thinking. To that end, I presented Mozilla's X-Ray Goggles as a tool that can be used to create a modified version of a real news story from a legitimate source. Mozilla's X-Ray Goggles lets you see the code behind any web page and change that code to display anything you want in place of the original text and images. After you have made the changes, you can publish a local copy of the web page.
Watch the following video that I created to learn how to use Mozilla's X-Ray Goggles.
Mozilla's X-Ray Goggles provides a good way for students to see how the code of a webpage works.
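Under the hood, a web page is just text, and the Goggles let you rewrite that text before the browser renders it. The same idea in a minimal Python sketch (the page and headlines below are invented):

```python
page = """<html><body>
  <h1 id="headline">City Council Approves New Park</h1>
  <p>Construction begins next spring.</p>
</body></html>"""

# X-Ray Goggles swaps text and images by editing the markup in place;
# locally, that amounts to a substitution on the HTML source.
modified = page.replace("City Council Approves New Park",
                        "City Council Approves Moon Base")
print(modified)
```

Seeing how trivially the displayed "news" can diverge from the source is exactly the media-literacy lesson the tool is meant to teach.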

Tuesday, July 16, 2019

This week we are discussing HIPAA. Is GDPR worse?
Hospital fined €460,000 for privacy breaches after Barbie case
The Haga hospital in The Hague has been fined €460,000 for poor patient file security, after it emerged that a reality TV star’s medical records had been accessed by dozens of unauthorised members of staff.
The Dutch privacy watchdog Authoriteit Persoonsgegevens said its research showed patient records at the hospital are still not properly secure.
The hospital gave 85 members of staff an official warning for looking at the medical files of Samantha de Jong, better known as Barbie, when she was hospitalised after a suicide attempt last year.
The members of staff were not involved in treating the reality TV star and were therefore not entitled to view her files, the hospital said.
Concerns about privacy have been one of the major brakes on developing a nationwide digital medical record system in the Netherlands.

“Everything in war is very simple. But the simplest thing is difficult.” (Carl von Clausewitz) Same with Computer Security.
How Small Mistakes Lead to Major Data Breaches
Four out of five of the top causes of data breaches are down to human or process error. In other words, human mistakes that could’ve been remedied with cybersecurity training or more careful consideration of security practices.

So far, no significant AI attack has been identified.
How can attackers abuse artificial intelligence?
Findings and topics covered in the study include:
  • Adversaries will continue to learn how to compromise AI systems as the technology spreads
  • The number of ways attackers can manipulate the output of AI makes such attacks difficult to detect and harden against
  • Powers competing to develop better types of AI for offensive/defensive purposes may end up precipitating an “AI arms race”
  • Securing AI systems against attacks may cause ethical issues (for example, increased monitoring of activity may infringe on user privacy)
  • AI tools and models developed by advanced, well-resourced threat actors will eventually proliferate and become adopted by lower-skilled adversaries
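The second bullet, manipulating a model's output, is worth making concrete. The classic example is an adversarial perturbation: if an attacker knows (or can estimate) a model's weights, a tiny nudge to each input feature can flip the prediction. A minimal sketch with a toy linear classifier (all weights and inputs are invented for illustration):

```python
# Toy linear classifier: score = w . x + b; predict class 1 if score > 0.
# An attacker who knows w can move each feature slightly in the
# direction of w's sign to flip the output while barely changing x.

w = [0.9, -0.4, 0.7]   # model weights (illustrative values)
b = -0.5

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial(x, eps):
    # FGSM-style step: shift each feature by eps in the sign of its weight
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

x = [0.2, 0.8, 0.1]          # benign input, classified as 0
x_adv = adversarial(x, 0.6)  # small per-feature perturbation flips it

print(predict(x), predict(x_adv))  # prints: 0 1
```

The perturbed input is numerically close to the original, which is exactly why the report warns such attacks are "difficult to detect and harden against."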

Won’t you take the AI’s word for it?
Good luck deleting someone's private info from a trained neural network – it's likely to bork the whole thing
AI systems have weird memories. The machines desperately cling to the data they’ve been trained on, making it difficult to delete parts of it. In fact, they often have to be completely retrained from scratch on a newer, smaller dataset.
That’s no good in an age where individuals can request that their personal data be removed from company databases under privacy measures like Europe’s GDPR rules. How do you remove a person’s sensitive information from a machine-learning model that has already been trained? A 2017 research paper by law and policy academics hinted that it may even be impossible.
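Why is retraining the only reliable fix? A model's parameters blend every training record together, so there is no per-record entry to delete. Even for a trivial "model" (here, just a running mean, with invented names and numbers), honoring an erasure request means recomputing from the remaining data:

```python
# A trained parameter irreversibly mixes all records; there is no
# "alice" entry inside it to erase. The only sure way to forget a
# record is to retrain from scratch without it.

records = {"alice": 70, "bob": 90, "carol": 110}

def train(data):
    # the learned "parameter" blends every record together
    return sum(data.values()) / len(data)

model = train(records)            # 90.0 -- alice's value is baked in

# GDPR-style erasure request: drop alice, then retrain from scratch
remaining = {k: v for k, v in records.items() if k != "alice"}
model = train(remaining)          # 100.0 -- the model has now "forgotten"

print(model)
```

A real neural network is this problem scaled up by millions of parameters, which is why full retraining is so costly and why "machine unlearning" is an open research area.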

So what’s the answer?
How The Software Industry Must Marry Ethics With Artificial Intelligence
Intelligent, learning, autonomous machines are about to change the way we do business forever. But in a world where corporations or even executives may be liable in a civil or even criminal court for their decisions, who is responsible for decisions made by artificial intelligence (AI)?
In the United States, courts are already having to wrestle with this science fiction scenario after an Arizona woman was killed by an experimental autonomous Uber vehicle. The European Commission recently shared ethical guidelines, requiring AI to be transparent, have human oversight and be subject to privacy and data protection rules.
How can we, as Dr. Joanna Bryson points out, avoid being “manipulated into situations where corporations can limit their legal and tax liability just by fully automating their business processes?”

I keep looking for something I understand.
How to explain deep learning in plain English
… “For decades, in order to get computers to respond to our requests for information, we had to learn to speak to them in a way they would understand,” says Tom Wilde, CEO at Indico Data Solutions. “This meant having to learn things like Boolean query language, or how to write complex rules that carefully instructed the computer what actions to take.”
… “Deep learning’s arrival flips that [historical context] on its head,” Wilde says. “Now the computer says to us, you don’t need to worry about carefully constructing your request ahead of time – also known as programming – but rather provide a definition of the desired outcome and an example set of inputs, and the deep learning algorithm will backward solve the answer to your question. Now non-technical people can create complex requests without knowing any programming.”
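Wilde's contrast can be shown side by side: the old way hand-writes the rule, the new way supplies labeled examples and lets the system derive a rule. This sketch uses an invented toy "learner" (word counting, not a real deep network) purely to illustrate the shift from programming rules to providing examples:

```python
# Old style: a human hand-writes the rule.
def rule_based(msg):
    return "spam" if "free money" in msg.lower() else "ham"

# New style: supply labeled examples; the system derives its own rule.
# This toy "learner" keeps words that appear only in spam examples.
def train(examples):
    spam_words = {w for text, label in examples if label == "spam"
                  for w in text.lower().split()}
    ham_words = {w for text, label in examples if label == "ham"
                 for w in text.lower().split()}
    return spam_words - ham_words

def predict(model, msg):
    return "spam" if set(msg.lower().split()) & model else "ham"

examples = [("win a prize now", "spam"),
            ("meeting moved to noon", "ham")]
model = train(examples)
print(predict(model, "claim your prize"))  # prints: spam
```

Nobody wrote a rule mentioning "prize"; the classifier generalized it from the examples, which is the "provide inputs and a desired outcome" workflow Wilde describes, in miniature.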

Interesting law. Does this apply to any terminated employee?
Lyft broke the law when it failed to tell Chicago about a driver it kicked off its app. A month later he was accused of killing a taxi driver while working for Uber
Lyft could face penalties of up to $10,000 for failing to report an incident to Chicago authorities last year.
Lyft deactivated driver Fungqi Lu in July 2018 following a fight with a local attorney, and was then required by law to alert the city's Department of Business Affairs and Consumer Protection. However, the Chicago Sun-Times reported on Monday that it never did.
Meanwhile, Lu continued to drive for Uber despite being kicked off the Lyft platform. (Many drivers work for multiple companies.) Four weeks after the first incident, he was accused of fatally kicking a 64-year-old taxi driver, Anis Tungekar, in a heated traffic argument caught on video.
Earlier this year, the family of the late Tungekar filed a lawsuit against Uber, alleging that the company was negligent in its hiring of Lu and seeking $10 million in damages. Uber declined to comment on its policies for instances like this but passed along the following statement:
"This is a horrible tragedy and our thoughts are with Mr. Tungekar's family and loved ones," a spokesperson said. "As soon as we were made aware of this, we immediately removed this individual's access from the platform." [What are they talking about? Bob]