Saturday, December 07, 2019


A bit late, don’t you think?
The Federal Trade Commission issued an Opinion finding that the data analytics and consulting company Cambridge Analytica, LLC engaged in deceptive practices to harvest personal information from tens of millions of Facebook users for voter profiling and targeting. The Opinion also found that Cambridge Analytica engaged in deceptive practices relating to its participation in the EU-U.S. Privacy Shield framework.
In an administrative complaint filed in July, FTC staff alleged that Cambridge Analytica and its then-CEO Alexander Nix and app developer Aleksandr Kogan deceived consumers. Nix and Kogan agreed to settle the FTC’s allegations. Cambridge Analytica, which filed for bankruptcy in 2018, did not respond to the complaint filed by FTC staff, or a motion submitted for summary judgment of the allegations.
The FTC staff’s administrative complaint alleged that Kogan worked with Nix and Cambridge Analytica to enable Kogan’s GSRApp to collect Facebook data from app users and their Facebook friends. The complaint alleged that app users were falsely told the app would not collect users’ names or other identifiable information. The GSRApp, however, collected users’ Facebook User ID, which connects individuals to their Facebook profiles.
The complaint also alleged that Cambridge Analytica claimed it participated in the EU-U.S. Privacy Shield —which allows companies to transfer consumer data legally from European Union countries to the United States—after allowing its certification to lapse. In addition, the complaint alleged the company failed to adhere to the Privacy Shield requirement that companies that cease participation in the Privacy Shield affirm to the Department of Commerce, which maintains the list of Privacy Shield participants, that they will continue to apply the Privacy Shield protections to personal information collected while participating in the program.
In its Opinion, the Commission found that Cambridge Analytica violated the FTC Act through the deceptive conduct alleged in the complaint. The Final Order prohibits Cambridge Analytica from making misrepresentations about the extent to which it protects the privacy and confidentiality of personal information, as well as its participation in the EU-U.S. Privacy Shield framework and other similar regulatory or standard-setting organizations. In addition, the company is required to continue to apply Privacy Shield protections to personal information it collected while participating in the program (or to provide other protections authorized by law), or return or delete the information. It also must delete the personal information that it collected through the GSRApp.




Will others pile on? Apparently it is not only US Senators (Orrin Hatch) who are fooled.
Facebook Gets $4m Fine From Hungary for Claim Services Are Free
Hungary’s competition watchdog handed Facebook Inc. a 1.2 billion forint ($4 million) fine for claiming its services were free.
Facebook made a profit from utilizing users’ online activity and data, which served as “payment” for the services, the Budapest-based authority said in an emailed statement on Friday. Claiming the website was free may have misled users regarding the value of the data they were giving the technology firm, it said. A Facebook spokesman was not immediately available for comment.




Interesting. Was this entirely a “Gee, that sounds good. Let’s try it.” kind of thing?
Social Media Vetting of Visa Applicants Violates the First Amendment
Since May, the State Department has required almost everyone applying for a U.S. visa—more than 14 million people each year—to register every social media handle they’ve used over the past five years on any of 20 platforms, including Facebook, Instagram, Twitter, and YouTube. The information collected through the new registration requirement is then retained indefinitely, shared widely within the federal bureaucracy as well as with state and local governments, and, in some contexts, even disseminated to foreign governments. The registration requirement chills the free speech of millions of prospective visitors to the United States, to their detriment and to ours.
On Thursday, on behalf of two U.S.-based documentary film organizations, the Knight First Amendment Institute and the Brennan Center for Justice sued to stop this policy, arguing that it violates the First Amendment as well as the Administrative Procedure Act.
There is no evidence that the social media registration requirement serves the government’s professed goals. Despite the State Department’s bare assertion that collecting social media information will “strengthen” the processes for “vetting applicants and confirming their identity,” the government has failed, in numerous attempts, to show that social media screening is even effective as a visa-vetting or national security tool.




Do they see many of these hacks? Not clear from the article or the FBI notice.
If You Have an Amazon Echo or Google Home, the FBI Has Some Urgent Advice for You
… The FBI puts it like this:
Hackers can use that innocent device to do a virtual drive-by of your digital life. Unsecured devices can allow hackers a path into your router, giving the bad guy access to everything else on your home network that you thought was secure. Are private pictures and passwords safely stored on your computer? Don't be so sure.
  • Change the device's factory settings from the default password. A simple Internet search should tell you how -- and if you can't find the information, consider moving on to another product.
  • Many connected devices are supported by mobile apps on your phone. These apps could be running in the background and using default permissions that you never realized you approved. Know what kind of personal information those apps are collecting and say "no" to privilege requests that don't make sense.
  • Secure your network. Your fridge and your laptop should not be on the same network. Keep your most private, sensitive data on a separate system from your other IoT devices.
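The FBI's first bullet is easy to act on in code. Below is a minimal Python sketch, strictly for devices you own, that checks whether a device's web admin page still accepts a vendor default password over HTTP basic auth; the address, endpoint, and credential pairs are hypothetical placeholders, and many devices use a different login scheme entirely.

    # Sketch: check whether a device on YOUR OWN network still accepts a
    # vendor default password via HTTP basic auth. The address and credential
    # pairs below are hypothetical placeholders.
    import requests

    DEVICE_URL = "http://192.168.1.50/"   # your device's admin page
    DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

    for user, pwd in DEFAULT_CREDS:
        try:
            resp = requests.get(DEVICE_URL, auth=(user, pwd), timeout=5)
        except requests.RequestException as exc:
            print(f"Could not reach device: {exc}")
            break
        if resp.status_code == 200:
            print(f"WARNING: device still accepts default login {user}/{pwd}")
            break
    else:
        print("No default credentials accepted (or the device uses another auth scheme).")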




Obvious? My students think so.
General Counsel Must Come to Grips With Artificial Intelligence
Artificial intelligence is evolving and exposing companies to new areas of liability and regulatory minefields, which means it’s high time for general counsel to get comfortable with AI if they want to avoid costly compliance missteps.
That’s the takeaway from a report that Lex Mundi released on Thursday. The report is based on a workshop discussion that Lex Mundi, a Houston-based global network of independent law firms, hosted earlier this year in Amsterdam.
Alexander Birnstiel, a Brussels-based partner at Noerr who contributed to the report, noted that “companies and their in-house legal teams must navigate an environment characterized by a patchwork of new competition enforcement initiatives and regulatory rules across jurisdictions whenever they engage in digital business.”
Several of the conference participants also suggested that corporate boards include cyber experts, who can serve as a “valuable ally to a general counsel.”
Government agencies, including the Securities and Exchange Commission and the Australian Securities and Investment Commission, are already using AI to detect misconduct. At the same time, general counsel should be pushing companies to leverage the technology to identify potential regulatory issues before they become serious problems.


(Related)
AI, Machine Learning and Robotics: Privacy, Security Issues
… "we're beginning to see surgical robots ... and robots that take supplies from one part of a hospital to another. … You can use AI to help sequence a child's DNA ... and match and identify a condition in very short order," Wu says in an interview with Information Security Media Group.
But along with those bold technological advances come emerging privacy and security concerns.
"The HIPAA Security Rule doesn't talk about surgical robots and AI systems," he notes. Nevertheless, HIPAA's administrative, physical and technical safeguard requirements still apply, he says.
As a result, organizations must determine, for example, "what kind of security management procedures are touching these devices and systems - and do you have oversight over them?"
Also critical is ensuring that "communications are secure from one point to another," he points out. "If you have an AI system that's drawing records from an electronic health record, how is that transmission being secured? How do we know the AI systems drawing [information] from the EHR system has been properly authenticated?"
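Wu's last two questions (is the transmission encrypted, and has the AI system pulling from the EHR been authenticated?) translate into very concrete engineering checks. Here is a minimal sketch of what that looks like, assuming a hypothetical FHIR-style endpoint and a short-lived OAuth bearer token; the URL, token source, and patient ID are illustrative, not any real system.

    # Sketch: an AI service pulling a record from an EHR over an encrypted,
    # authenticated channel. Endpoint, token source, and patient ID are
    # hypothetical placeholders.
    import os
    import requests

    EHR_BASE = "https://ehr.example.org/fhir"      # hypothetical FHIR base URL
    TOKEN = os.environ["EHR_ACCESS_TOKEN"]         # short-lived token issued to the AI system

    resp = requests.get(
        f"{EHR_BASE}/Patient/12345",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/fhir+json"},
        timeout=10,
        verify=True,   # leave TLS certificate verification on; never disable it
    )
    resp.raise_for_status()
    patient = resp.json()
    print(patient.get("id"), patient.get("birthDate"))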




If you are a regular reader of this Blog, you know what these parents were concerned about.
Lois Beckett reports:
Parents at a public school district in Maryland have won a major victory for student privacy: tech companies that work with the school district now have to purge the data they have collected on students once a year. Experts say the district’s “Data Deletion Week” may be the first of its kind in the country.
It’s not exactly an accident that schools in Montgomery county, in the suburbs of Washington DC, are leading the way on privacy protections for kids. The large school district is near the headquarters of the National Security Agency and the Central Intelligence Agency. It’s a place where many federal employees, lawyers and security experts send their own kids.
Read more on The Guardian.



Friday, December 06, 2019


Tools for hackers.
Easily Reveal Hidden Passwords In Any Browser
lifehacker – “Autofill is a great setting if you don’t want to have to remember and type in your password every time you log in to an online account. In fact, we highly recommend you use a password manager (and take advantage of autofill features) to keep track of secure passwords. But autofill makes it easy to forget what your passwords are in the event you need to type them in elsewhere. Thankfully, there’s a way around this.” [Take time to read the Comments section for additional useful information – and once again – try DuckDuckGo rather than Chrome]


(Related) Delete does not mean delete.




A ‘complication’ my Security students must consider.
How to fool infosec wonks into pinning a cyber attack on China, Russia, Iran, whomever
Learning points, not an instruction manual
"I can buy infrastructure in Iran very easily, it turns out," he said. "That's not 26 servers; that's 26 different VPS providers that, with a credit card or Bitcoin, I can go ahead and buy servers in Iran that I can send traffic through. It's going to be awesome!"




Help to justify that security budget…
The Drums of Cyberwar
In mid-October, a cybersecurity researcher in the Netherlands demonstrated, online, as a warning, the easy availability of the Internet protocol address and open, unsecured access points of the industrial control system—the ICS—of a wastewater treatment plant not far from my home in Vermont. Industrial control systems may sound inconsequential, but as the investigative journalist Andy Greenberg illustrates persuasively in Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers, they have become the preferred target of malicious actors aiming to undermine civil society. A wastewater plant, for example, removes contaminants from the water supply; if its controls were to be compromised, public health would be, too.
That Vermont water treatment plant’s industrial control system is just one of 26,000 ICSs across the United States whose Internet configurations leave them susceptible to hacking, all identified and mapped by the Dutch researcher.
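If you need to show management how visible this exposure is, a few lines against the Shodan search API make the point. A minimal sketch; the queries use common ICS ports purely for illustration, you need your own API key, and the counts will differ from the researcher's.

    # Sketch: counting Internet-exposed industrial control systems with Shodan.
    # Requires the 'shodan' package and your own API key; queries are illustrative.
    import os
    import shodan

    api = shodan.Shodan(os.environ["SHODAN_API_KEY"])

    QUERIES = {
        "Modbus (port 502)": "port:502",
        "Siemens S7 (port 102)": "port:102",
        "BACnet (port 47808)": "port:47808",
    }

    for label, query in QUERIES.items():
        try:
            result = api.count(f"{query} country:US")
            print(f"{label}: ~{result['total']} hosts visible in the US")
        except shodan.APIError as exc:
            print(f"Query failed for {label}: {exc}")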




Local
CO: Sunrise Community Health Notifies Patients of Data Security Incident
Sunrise Community Health in Colorado has posted a notice concerning a recent data security incident. From their notice:
Sunrise recently learned certain employee email accounts were accessed by an unauthorized individual(s). On November 5, 2019, it was determined that certain personal information was present in the affected email accounts. Sunrise began working with third party forensic experts to confirm the full nature and scope of this incident and to confirm the security of the Sunrise email environment. The investigation is ongoing at this time. To date, the investigation has determined certain Sunrise email accounts may have been subject to unauthorized access at various times between September 11, 2019 and November 22, 2019.
The complete notice can be found here or on their website.




There must have been much more criticism than normal. Just a reaction to Face recognition?
After criticism, Homeland Security drops plans to expand airport face recognition scans to US citizens




Could you (or your AI) explain it to your grandma?
UK ICO and The Alan Turing Institute Issue Draft Guidance on Explaining Decisions Made by AI
The UK’s Information Commissioner’s Office (“ICO”) has issued and is consulting on draft guidance about explaining decisions made by AI. The ICO prepared the guidance with The Alan Turing Institute, which is the UK’s national institute for data science and artificial intelligence. Among other things, the guidance sets out key principles to follow and steps to take when explaining AI-assisted decisions — including in relation to different types of AI algorithms — and the policies and procedures that organizations should consider putting in place.
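Most of the guidance is about process and documentation rather than code, but one simple technical starting point is preferring models whose decisions can be printed in plain language. A toy sketch with scikit-learn; the data and feature names are invented for illustration.

    # Sketch: a model whose decision logic can be printed as readable rules.
    # The toy data and feature names are invented for illustration.
    from sklearn.tree import DecisionTreeClassifier, export_text

    features = ["income", "years_at_address", "prior_defaults"]
    X = [[30000, 1, 2], [85000, 6, 0], [52000, 3, 1], [120000, 10, 0]]
    y = [0, 1, 0, 1]   # 1 = application approved in the historical data

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # A human-readable explanation of how the model decides:
    print(export_text(model, feature_names=features))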




Huh. Another worry to worry about.
Are Businesses Ready for Deglobalization?
As we enter a new decade, characterized by rising economic complexity and geopolitical divisions — U.S.-China tensions, populism and nationalism in Europe, and the looming risk of a global recession — forward-thinking business leaders are developing strategies to mitigate the longer-term risk of deglobalization. They are concerned about trade protectionism, and the revenue a company could lose in any tariff wars.
However, there is a more hidden risk associated with deglobalization: that global corporations are not structured in a way that is fit for purpose to compete in a deglobalizing world. It is increasingly understood that this ever-more siloed world directly impacts three key pillars of global corporations: technology, global recruiting, and the finance function.




Perspective. (Lowest score is 12/25)
50 countries ranked by how they’re collecting biometric data and what they’re doing with it
comparitech: “From passport photos to accessing bank accounts with fingerprints, the use of biometrics is growing at an exponential rate. And while using your fingerprint may be easier than typing in a password, just how far is too far when it comes to biometric use, and what’s happening to your biometric data once it’s collected, especially where governments are concerned? Here at Comparitech, we’ve analyzed 50 different countries to find out where biometrics are being taken, what they’re being taken for, and how they’re being stored. While there is huge scope for biometric data collection, we have taken 5 key areas that apply to most countries (so as to offer a fair country-by-country comparison and to ensure the data is available). Each country has been scored out of 25, with high scores indicating extensive and invasive use of biometrics and/or surveillance and a low score demonstrating better restrictions and regulations regarding biometric use and surveillance…” [Spoiler – U.S. ranks #4 of top 5 countries using biometric data]




Because I like lists.
Boing Boing’s 28 favorite books in 2019
boing boing Rob Beschizza – “Here’s 28 of our favorites from the last year – not all of them published in the last year, mind you – from fairy-tales to furious politics and everything in between, including the furious fairy-tale politics getting between everything. The links here include Amazon Affiliate codes; this helps us make ends meet at Boing Boing, the world’s greatest neurozine…” [Each “favorite” or “best books” list offers unique insights on books that you may have missed – like Coders: The Making of a New Tribe and the Remaking of the World.]



Thursday, December 05, 2019


No doubt these machines were each “tested and certified.”
The electronic votes said he lost in a statistically impossible landslide, but the paper ballots said he won
Vote totals in a Northampton County judge’s race showed one candidate, Abe Kassis, a Democrat, had just 164 votes out of 55,000 ballots across more than 100 precincts. Some machines reported zero votes for him. In a county with the ability to vote for a straight-party ticket, one candidate’s zero votes was a near statistical impossibility. Something had gone quite wrong.
When officials counted the paper backup ballots generated by the same machines, they realized Kassis had narrowly won.
It is still unknown what caused the problem since the machines "are locked away for 20 days after an election according to state law." However, suspicions include a bug in the software, as well as a fundamentally flawed design.
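To see why "near statistical impossibility" is not hyperbole, here is a back-of-the-envelope calculation; the support rate and precinct size are assumed purely for illustration, not taken from the actual returns.

    # Sketch: probability that a candidate with real support records zero votes
    # in a single precinct. Numbers are assumed, not actual election data.
    support_rate = 0.10          # assume only 10% of voters back the candidate
    ballots_in_precinct = 500

    p_zero = (1 - support_rate) ** ballots_in_precinct
    print(f"P(zero votes in one precinct) = {p_zero:.1e}")   # roughly 1e-23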




You don’t ask for evidence when you believe you “have to do something!”
Schools Spy on Kids to Prevent Shootings, But There's No Evidence It Works




Your Holiday reading?
Privacy Papers 2019
THE WINNERS OF THE 2019 PRIVACY PAPERS FOR POLICYMAKERS (PPPM) AWARD ARE:




Some companies are just massive fines waiting to happen.
Getting Cookie Consent Right
One could be forgiven for thinking that knowing how to comply with a legal obligation that has been in place for nearly a decade would be clear cut. However, widespread practice tells us that this is far from the truth.




Too simple? What would it take for everyone to agree?
DARPA SCIENTIST: ENGINEERS MUST STOP MAKING AUTONOMOUS WEAPONS
If we really want to prevent the rise of autonomous weapons — killer robots that can pull the trigger without needing a human’s approval — then engineers will actually need to stop working towards them.
So argues Christoffer Heckman, a University of Colorado Boulder computer scientist who’s funded by DARPA, the Pentagon’s research division, in an essay in The Conversation. It may sound like an obvious solution, but Heckman points out that it’s sometimes hard for researchers to predict how their work might get used or abused in the future.




The ‘Double Secret Probation’ debate.
Justices debate allowing state law to be “hidden behind a pay wall”
Ars Technica – “The courts have long held that laws can’t be copyrighted. But if the state mixes the text of the law together with supporting information, things get trickier. In Monday oral arguments, the US Supreme Court wrestled with the copyright status of Georgia’s official legal code, which includes annotations written by LexisNexis. The defendant in the case is Public.Resource.Org (PRO), a non-profit organization that publishes public-domain legal materials. The group obtained Georgia’s official version of state law, known as the Official Code of Georgia Annotated, and published the code on its website. The state of Georgia sued, arguing that while the law itself is in the public domain, the accompanying annotations are copyrighted works that can’t be published by anyone except LexisNexis. Georgia won at the trial court level, but PRO won at the appeals court level. On Monday, the case reached the US Supreme Court. “Why would we allow the official law to be hidden behind a pay wall?” asked Justice Neil Gorsuch. Georgia’s lawyer countered that the law wasn’t hidden behind a paywall—at least not the legally binding parts. LexisNexis offers a free version of Georgia’s code, sans annotations, on its website. But that version isn’t the official code. LexisNexis’ terms of service explicitly warns users that it might be inaccurate. The company also prohibits users from scraping the site’s content. If you want to own the latest official version of the state code, you have to pay LexisNexis hundreds of dollars. And if you want to publish your own copy of Georgia’s official code, you’re out of luck…”




Tools for Big Data?
Netflix: Our Metaflow Python library for faster data science is now open source
The video-streaming giant uses machine learning across all aspects of its business, from screenplay analysis, to optimizing production schedules, predicting churn, pricing, translation, and optimizing its giant content distribution network.
According to Netflix software engineers, Metaflow was built to help boost the productivity of its data scientists who like to express business logic through Python code but don't want to spend too much time thinking about engineering issues, such as object hierarchies, packaging issues, or dealing with obscure APIs unrelated to their work.
Netflix offers this nutshell description of its Python library on the new metaflow.org website: "Metaflow helps you design your workflow, run it at scale, and deploy it to production. It versions and tracks all your experiments and data automatically. It allows you to inspect results easily in notebooks."
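For the curious, the basic shape of a flow follows the pattern in Metaflow's own documentation. A minimal sketch with placeholder step contents:

    # Sketch: a minimal Metaflow flow (pip install metaflow).
    # Step contents are placeholders; run with: python hello_flow.py run
    from metaflow import FlowSpec, step

    class HelloFlow(FlowSpec):

        @step
        def start(self):
            # Anything assigned to self is versioned and tracked automatically.
            self.message = "hello from Metaflow"
            self.next(self.end)

        @step
        def end(self):
            print(self.message)

    if __name__ == "__main__":
        HelloFlow()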



Wednesday, December 04, 2019


A really interesting article about a really interesting question.
Merck cyberattack’s $1.3 billion question: Was it an act of war?
NotPetya’s impact on Merck that day — June 27, 2017 — and for weeks afterward was devastating.
In all, the attack crippled more than 30,000 laptop and desktop computers at the global drugmaker, as well as 7,500 servers, according to a person familiar with the matter. Sales, manufacturing, and research units were all hit. One researcher told a colleague she’d lost 15 years of work. Near Dellapena’s suburban office, a manufacturing facility that supplies vaccines for the U.S. market had ground to a halt. “For two weeks, there was nothing being done,” Dellapena recalls. “Merck is huge. It seemed crazy that something like this could happen.”
As it turned out, NotPetya’s real targets were half a world away, in Ukraine, which has been in heightened conflict with Russia since 2014. In the former Soviet republic, the malware rocketed through government agencies, banks, power stations — even the Chernobyl radiation monitoring system. Merck was apparently collateral damage.
Merck did what any of us would do when facing a disaster: It turned to its insurers. After all, through its property policies, the company was covered — after a $150 million deductible — to the tune of $1.75 billion for catastrophic risks including the destruction of computer data, coding, and software. So it was stunned when most of its 30 insurers and reinsurers denied coverage under those policies. Why? Because Merck’s property policies specifically excluded another class of risk: an act of war.
In early 2020, experts will testify behind closed doors as to what constitutes an act of war in the cyber age. The case could be settled at some point — or it could drag on for years before going to trial.
The challenge for insurers is to show that NotPetya was an act of war even though there’s no clear definition in U.S. law on what that means in the cyber age.


(Related)
When do cyberattacks deserve a response from NATO?
… These attacks have been a concern within the United States as well, which has led to new approaches that involve daily engagement in cyberspace as a way to confront or delay these events.
… “States have a huge responsibility to talk about their understanding of international law … That’s how you create the understanding of what it would be that would facilitate answering those questions,” she said.
As an example, Jordan mentioned the position taken by the UK attorney general, who acknowledged in May 2018 that a cyber operation, no matter how hostile, never violates sovereignty. On the other hand, the French outlined a stance in September 2019 that remote cyber operations that cause effects are, indeed, a violation of sovereignty.
The United States has yet to officially state an opinion on this subject.




Just regular, everybody does it, espionage?
North Korea Hackers Breached Indian Nuke Reactor In Search For Advanced Thorium Technology
North Korea is trying to get its hands on advanced nuclear technology at any cost. One of India’s largest nuclear plants, the Kudankulam plant, located in the southern state of Tamil Nadu, was recently attacked by North Korean hackers.




Privacy theater?
Portland, Oregon, aims to ban the use of controversial facial recognition technology not only by city government, but also by private companies.




The impact of CCPA. No need to block personalized ads if personal data was never collected.
Google Will Enable Websites to Block Personalized Ads Under CCPA
With just weeks to go until the California Consumer Privacy Act (CCPA) goes into effect in January 2020, Internet companies such as Google are already taking early, proactive steps to ensure that they will be in full compliance. At the end of November, Google announced that it would enable websites and apps to block personalized ads as part of its CCPA compliance efforts. This new law is similar to the European General Data Protection Regulation (GDPR) in that it requires companies give customers the right to opt-out of personal data collection. Since personalized ads require detailed information that has been collected from a user’s personal profile in order to be targeted effectively, it is easy to see why these ads would be covered under the new CCPA.




Some tips for building a Best Practices approach.
Talend Report Showcases Low GDPR Compliance Rates for Data Subject Access Requests
More than 18 months after the European General Data Protection Regulation (GDPR) went into effect, companies and public sector organizations worldwide are still having a very difficult time complying with a key GDPR provision that requires them to respond to any Data Subject Access Request (DSAR) in less than a month. In fact, Talend’s new survey shows that less than half (42%) of all companies and public sector organizations were able to respond to a Data Subject Access Request within the stipulated time period.
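The one-month clock is easy to lose without tooling. A trivial sketch of the kind of deadline check a DSAR workflow might run; the field names, sample dates, and the 30-day simplification of "one month" are all illustrative assumptions.

    # Sketch: flag Data Subject Access Requests nearing or past the GDPR
    # one-month response deadline. Sample data and field names are invented.
    from datetime import date, timedelta

    RESPONSE_WINDOW = timedelta(days=30)   # simplification of GDPR's "one month"

    dsar_log = [
        {"id": "DSAR-101", "received": date(2019, 11, 1), "closed": False},
        {"id": "DSAR-102", "received": date(2019, 11, 25), "closed": False},
    ]

    today = date(2019, 12, 6)
    for req in dsar_log:
        if req["closed"]:
            continue
        due = req["received"] + RESPONSE_WINDOW
        status = "OVERDUE" if today > due else f"due {due.isoformat()}"
        print(f"{req['id']}: {status}")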




Futurist perspective.
From algae to AI, the 12 themes experts predict will shape the world in 50 years
Here are the 12 main themes that emerged:




Specific sites my students should avoid. (Wink, wink)



Tuesday, December 03, 2019


A friend once told me that the fastest way to get rich would be to invent a new sin. New hacking techniques work kind of like that.
New Experian Data Breach Trends Report Highlights New Risks For 2020
With every passing year, hackers are becoming more sophisticated not just in the technologies that they use to carry out their attacks, but also in ways that they spot potential new attack surfaces. That’s one of the big takeaway trends from Experian’s seventh annual “Data Breach Industry Forecast 2020,” which outlines five key data breach trends to keep an eye on over the next 12 months.
At the top of the list of new trends is text-based “smishing” attacks, in which nefarious hackers use SMS text messages to carry out phishing attacks against unsuspecting users.
Another trend cited in the Experian data breach trends report, for example, is the “hacker in the sky” attack involving drones.
… some cybercriminals are experimenting with so-called “deepfake” technology (a term coined in Reddit online forums in 2017), in which artificial intelligence (AI) algorithms are used to create false identities.




An example of “undue reliance?” Imagine a ransomware attack where client contact information was blocked…
In Weekend Outage, Diabetes Monitors Fail to Send Crucial Alerts
For many parents of children with diabetes, the Dexcom G6 continuous glucose monitor is a lifesaver. The device tracks their children’s glucose levels and sends them an alert when their blood sugar climbs too high or falls too low, allowing them to take quick action to correct it.
But around midnight on Friday, Dexcom suffered a mysterious service outage, leaving thousands of people who rely on the device for critical information in the dark. Many parents who woke up on Saturday morning and learned about the outage hours after it began had to scramble to make sure their children were safe. The affected service, Dexcom Follow, had been partly restored by Monday morning, a company spokesman said.




This may be in the future.
US Government Will Welcome Ethical Hackers
According to the Department of Homeland Security’s Cybersecurity and Infrastructure Agency (CISA), the US federal government hasn’t been gracious when presented with these voluntary reports. Some agencies ignore them, while some publish officious language on their sites threatening legal action if anyone tinkers with their systems. That isn’t helpful behaviour, it says. Now, it wants to change all that.
The Agency has published a proposed directive forcing agencies to play nicely with voluntary bug reporters. Under the draft rules, federal agencies would have to provide and monitor clear channels (an email or web form) through which people could report security flaws. They would also have to respond and keep researchers updated on efforts to fix the bugs.
The rules go beyond basic courtesy, though. Agencies could no longer publish threatening language discouraging bug hunters. Neither could they forbid hackers from publishing the bugs after waiting for an acceptable period.




Because the US Passport photo won’t serve?
From Papers, Please!
Buried in the latest Fall 2019 edition of an obscure Federal bureaucratic planning database called the Unified Agenda of Regulatory and Deregulatory Actions is an official notice from the U.S. Department of Homeland Security (DHS) that:
To facilitate the implementation of a seamless biometric entry-exit system that uses facial recognition … DHS is proposing to amend the regulations to provide that all travelers, including U.S. citizens, may be required to be photographed upon entry and/or departure [to or from the U.S.].
Read more on Papers, Please!




Been there, screwed that up too.
If You’re Reading This Now It’s (Almost) Too Late (and Other GDPR Lessons)
January 1, 2020 is a landmark day for data privacy in the United States. It’s the day the biggest state in the union, indeed, the sixth biggest economy in the entire world, California, will enact its own piece of privacy-focused regulation, the California Consumer Privacy Act (or CCPA).
I want to address some of the most pervasive misconceptions in hopes that they’ll bolster readers’ cases when lobbying their colleagues to get serious about the CCPA. Because after going through this before with the GDPR, I feel secure in saying it will represent a significant challenge for many businesses.
First, the six month “grace period” from January to July 2020 does not actually mean that companies can wait until July to ensure they’re compliant. It does not apply to the private right of action that consumers can exercise (with a value of up to $750 per consumer per breach incident). And the California Attorney General will be able to prosecute retroactively for companies who were in violation during the first six months – it’s true that the AG is likely to skew lenient during this period, but there’s nothing to stop them from taking a hard line if they see a case of gross negligence.
A common refrain I hear is “we just did this with the GDPR, so we don’t need to go back and do it all over again.”
This is often not true; it’s possible that a business, in preparing for GDPR, overspec’d so much that they unwittingly attained CCPA compliance. It’s much more likely that they did enough to scrape by GDPR, and, for example, dealt only with their European data. Most legacy businesses with a large footprint aren’t holding European and US customer data together. Even if they are, there are important aspects in which the CCPA is even more stringent than the GDPR – for example, regarding the Right to Equal Service and Prices.
Lastly, there’s the dangerous argument that a given business isn’t large or visible enough to incur regulatory wrath – that if you’re not a FAANG company the risk of privacy non-compliance is theoretical rather than practical. A simple look at the GDPR numbers demonstrates this is false. Enforcement started slow but has picked up significantly in 2019, as regulatory authorities found their footing. A running tracker hosted by CMS Law currently shows 86 different entities have been fined under GDPR, ranging from the world’s biggest companies to small merchants to the mayor of a small Belgian town.




Privacy for Twits? Interesting that what CCPA allows may be a violation of GDPR.
Twitter makes global changes to comply with privacy laws
Twitter Inc is updating its global privacy policy to give users more information about what data advertisers might receive and is launching a site to provide clarity on its data protection efforts, the company said on Monday.
Twitter also announced on Monday that it is moving the accounts of users outside of the United States and European Union, which were previously contracted by Twitter International Company in Dublin, Ireland, to the San Francisco-based Twitter Inc.
The company said this move would allow it the flexibility to test different settings and controls with these users, such as additional opt-in or opt-out privacy preferences, that would likely be restricted by the General Data Protection Regulation (GDPR), Europe’s landmark digital privacy law.
We want to be able to experiment without immediately running afoul of the GDPR provisions
Twitter’s new privacy site, dubbed the ‘Twitter Privacy Center’ is part of the company’s efforts to showcase its work on data protection and will also give users another route to access and download their data.




So useful we may ignore the risks?
Amazon AI generates medical records from patient-doctor conversations
The company says its new software can understand medical jargon and automatically punctuate text.
Amazon believes its latest Web Services tool will help doctors spend more time with their patients. The tool, called Amazon Transcribe Medical, allows doctors to easily transcribe patient conversations and add those interactions to someone's medical records with the help of deep learning software.
… For Amazon, Transcribe Medical is just the company's latest foray into the lucrative healthcare industry. Earlier this year, the company announced Amazon Care, a service that allows employees to take advantage of virtual doctor consultations and in-home follow-ups. Moving forward, the issue Amazon is likely to face as it tries to convince both doctors and their patients to use Transcribe Medical is -- as always -- related to privacy.
Wood told CNBC the tool is fully compliant with the federal government's Health Insurance Portability and Accountability Act (HIPAA). Amazon, however, will likely have to go above and beyond the requirements of the law to satisfy privacy critics. HIPAA doesn't provide detailed guidance on how healthcare companies should secure digital patient medical records and hasn't been updated since 2013. The urgent need for updated legislation was highlighted earlier this year when a ProPublica report found that the records of some 5 million patients in the US were easily accessible with free software. The company will need to be specific about how any data will be used, and who has access to it.
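Developers reach the service through the regular AWS SDK. A minimal sketch with boto3, assuming the start_medical_transcription_job call described in AWS's documentation; the bucket, file, and job names are placeholders, and the audio must already sit in S3.

    # Sketch: submitting an audio file to Amazon Transcribe Medical via boto3.
    # Assumes the start_medical_transcription_job API as documented by AWS;
    # bucket, file, and job names are placeholders.
    import boto3

    transcribe = boto3.client("transcribe", region_name="us-east-1")

    transcribe.start_medical_transcription_job(
        MedicalTranscriptionJobName="visit-2019-12-03-example",
        LanguageCode="en-US",
        Media={"MediaFileUri": "s3://example-clinic-audio/visit.wav"},
        OutputBucketName="example-clinic-transcripts",   # results land in your own S3 bucket
        Specialty="PRIMARYCARE",
        Type="CONVERSATION",
    )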




One possible view.
The Ethical Threat of Artificial Intelligence in Practice
How do clinicians set rules that allow professionals "to make good use of technology to find patterns in complex data" but also "stop companies from extracting unethical value from those data?" asked Raymond Geis, MD.
Geis, from the American College of Radiology (ACR) Data Science Institute, is one of the authors of a joint statement that addresses the potential for the unethical use of data, the bias inherent in datasets, and the limits of algorithmic learning, and was the moderator of a session on the topic at the Radiological Society of North America (RSNA) 2019 Annual Meeting in Chicago.