Friday, April 02, 2021

They really want to do this. Perhaps as a future medical passport? Ready for the next pandemic?

https://www.nbcnews.com/tech/covid-passports-are-coming-not-easy-build-rcna554

The next vaccine challenge: Building a workable 'passport' app

Tech companies, nonprofits and state agencies are racing to build digital vaccine certificates, and the Biden administration may have a say in how they turn out.

The Biden administration said this week that it won’t build a national vaccination app, leaving it to the private sector to create mobile digital passports that can prove people have been vaccinated for Covid-19.

But that doesn’t mean the White House is going to be hands-off.

Technologists and consultants who are helping to design future digital vaccine cards said they are counting on the Biden administration to provide federal support for the effort, even if officials are working mostly behind the scenes to shape decisions related to privacy or where vaccine passports could be deployed.





Once your data is exposed, the path to clean-up is not always obvious. Backups and archives.

https://www.databreaches.net/good-luck-explaining-to-hhs-why-your-phi-is-in-githubs-vault-for-the-next-1000-years/

Good Luck Explaining to HHS Why Your PHI is in GitHub’s Vault for the Next 1,000 Years

You may see a number of hospitals and covered entities issuing statements this week about a data security incident involving Med-Data (Med-Data, Incorporated). So far, Memorial Hermann, U. of Chicago, Aspirus, and OSF Healthcare have posted notices. Others should be or may be posting soon. Here’s DataBreaches.net’s exclusive report on the incident.

Another Day, Another GitHub Leak?

In August 2020, Dutch independent security researcher Jelle Ursem and DataBreaches.net published a paper describing nine data leaks found in GitHub public repositories that involved protected health information.

In November, Ursem discovered yet another developer who had compromised the security of protected health information (PHI) by exposing it in public repositories. Much of the data appeared to involve claims data (Electronic Data Interchange, or EDI, data). Because the data came from a number of different clinical entities and involved claims data, the source appeared to be a business associate, which we set out to identify. Our investigation into the data and the covered entities suggested that the firm might be Med-Data.
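Claims data in EDI form has a distinctive shape: an ANSI X12 interchange opens with an "ISA" envelope segment, and an 837 healthcare-claim transaction set opens with "ST*837". As a minimal sketch (our own illustration of how leaked files might be triaged, not the researchers' actual tooling; the helper name and sample are hypothetical):

```python
# Flag files that look like X12 healthcare claims (837 transactions).
# Assumes standard X12 conventions: an "ISA" interchange envelope and
# an "ST*837" transaction-set header. Illustrative fingerprint only.

def looks_like_claims_edi(text: str) -> bool:
    body = text.lstrip()
    return body.startswith("ISA") and "ST*837" in body

# A skeletal (non-conformant) X12 sample, just to exercise the check:
sample = "ISA*00*SAMPLE~GS*HC*1*1~ST*837*0001~SE*2*0001~GE*1*1~IEA*1*1~"
```

A real triage pass would of course go further (parsing segments, counting distinct submitters), but even a fingerprint this crude separates claims files from, say, CSV exports.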

On December 8, DataBreaches.net reached out to the firm, but neither Ursem nor this site could get anyone to respond to our attempts to alert them to their leak.

On December 10, after other methods (including a voicemail to the executive who had ignored me) failed, DataBreaches.net left a voicemail for Med-Data's counsel. She promptly called back, and from then on, we were taken seriously. Note: this blogger is the "independent journalist" whom Med-Data's substitute notice mentions as contacting them on December 10, although we actually began contacting them on December 8.

On December 14, at their request, DataBreaches.net provided Med-Data with links to the repositories that were exposing protected health information. Med-Data’s statement indicates that the repositories were removed by December 17.

DataBreaches.net initially held off on reporting the incident for a few reasons, but then, to be honest, just totally forgot about it.

So What Happened?

When Med-Data investigated the exposure on GitHub, they discovered that a former employee had saved files to personal folders in public repositories (yes, more than one repository). The improper exposure had begun no later than September 2019, and may have begun earlier.

On February 5, 2021, cybersecurity specialists retained by Med-Data provided them with a list of individuals whose PHI was impacted by the incident. Med-Data reports:

A review of the impacted files revealed that they contained individuals’ names, in combination with one or more of the following data elements: physical address, date of birth, Social Security number, diagnosis, condition, claim information, date of service, subscriber ID (subscriber IDs may be Social Security numbers), medical procedure codes, provider name, and health insurance policy number.

That report is consistent with what we found in the exposed data.

Med-Data notified its clients on February 8, 2021 and mailed notices to impacted patients on March 31. Their notice does not explain why it took more than 60 days for notifications to be made. Those impacted were offered mitigation services with IDX.

In response to the incident, Med-Data has taken steps to minimize the risk of a similar event happening in the future. They

“implemented additional security controls, blocked all file sharing websites, updated internal data policies and procedures, implemented a security operations center, and deployed a managed detection and response solution.”

What they do not seem to have done yet, however, is provide a clear way to report a data security concern. Neither Ursem nor DataBreaches.net could find any link or contact method for doing so. Med-Data needs a clearly published security contact, monitored by someone who can evaluate or escalate reports.

But Were All the Data Really Removed?

One issue that arose — and may still not be resolved as we have received no answer to our inquiry about this — involves GitHub’s Arctic Code Vault.

As GitHub explains it, the vault is a data repository in a very-long-term archival facility: a decommissioned coal mine in the Svalbard archipelago, closer to the North Pole than the Arctic Circle. GitHub reportedly captured a snapshot of every active public repository on February 2, 2020, and preserved that data in the Arctic Code Vault. More details about the vault can be found on GitHub.

So what happens if copyrighted material that should not have been in a public repository is swept up into the vault? What happens if personal and sensitive material that never should have been in a public repository is swept up into the vault? And what happened to some of Med-Data's code, which seems to have been swept into the vault (as indicated by the badge showing that their developer and the repositories became vault contributors)?

When Ursem pointed out this vault issue to Med-Data, they reached out to GitHub about getting logs for the vault and to discuss removal of code from the vault (depending on what the logs might show). We do not know what transpired after that, although there had been some muttering that Med-Data might sue GitHub to get the logs.

Did GitHub provide the logs? If so, what did they show? Is anyone’s PHI in GitHub’s Arctic Code Vault? And if so, what happens? Will GitHub remove it? Or will they claim they are immune from suit in the U.S. under Section 230 (if it still exists by then)? Or will code just be left there for researchers to explore in 1,000 years so they can wade through the personal and protected health information or other sensitive information of people who trusted others to protect their privacy?

In November, 2020, Ursem posed the question to GitHub on Twitter. They never replied.

We hope that GitHub cooperated with Med-Data, but we raise the issue here because we will bet you that many developers and firms have never even considered what might happen that could go so very wrong. This might be a good time to review our recommendations in "No Need to Hack When It's Leaking."

Update 8:01 pm: Post-publication, we found that King’s Daughters and SCL Health had also posted notices on the Med-Data breach. We know that there are other entities that should be disclosing, so this will be updated when we find their notices.





AI Governance?

https://hbr.org/2021/04/if-your-company-uses-ai-it-needs-an-institutional-review-board

If Your Company Uses AI, It Needs an Institutional Review Board

Summary. Companies that use AI know that they need to worry about ethics, but when they start, they tend to follow the same broken three-step process: They identify ethics with "fairness," they focus on bias, and they look to use technical tools and stakeholder outreach to mitigate their risks. Unfortunately, this sets them up for failure. When it comes to AI, focusing on fairness and bias ignores a huge swath of ethical risks, and many of these ethical problems defy technical solutions. Instead of trying to reinvent the wheel, companies should look to the medical profession and adopt institutional review boards (IRBs). IRBs, which are composed of diverse teams of experts, are well suited to complex ethical questions. When given jurisdiction and power, and brought in early, they're a powerful tool that can help companies think through hard ethical problems, saving money and brand reputation in the process.





See? It can be done.

https://techxplore.com/news/2021-04-artificial-intelligence-algorithm.html

Researchers develop 'explainable' artificial intelligence algorithm

Sudhakar says that, broadly speaking, there are two methodologies to develop an XAI algorithm—each with advantages and drawbacks.

The first, known as back propagation, relies on the underlying AI architecture to quickly calculate how the network's prediction corresponds to its input. The second, known as perturbation, sacrifices some speed for accuracy and involves changing data inputs and tracking the corresponding outputs to determine the necessary compensation.
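The perturbation approach can be shown in miniature. Everything below (the toy fixed-weight "model," the patch size) is our own illustrative assumption, not the SISE algorithm itself; SISE operates on a trained CNN's feature maps, but the core idea of occluding inputs and measuring the output change looks like this:

```python
# Perturbation-style ("occlusion") explanation in miniature: slide a
# small occluding patch over the input and record how much the model's
# score drops at each position. Big drops mark important regions.

SIZE, PATCH = 8, 2

# Hypothetical stand-in model: it only "looks at" the 4x4 center.
WEIGHTS = [[1.0 if 2 <= r < 6 and 2 <= c < 6 else 0.0
            for c in range(SIZE)] for r in range(SIZE)]

def model_score(image):
    return sum(image[r][c] * WEIGHTS[r][c]
               for r in range(SIZE) for c in range(SIZE))

def occlusion_map(image):
    base = model_score(image)
    heat = [[0.0] * SIZE for _ in range(SIZE)]
    for r0 in range(0, SIZE, PATCH):
        for c0 in range(0, SIZE, PATCH):
            occluded = [row[:] for row in image]
            for r in range(r0, r0 + PATCH):      # zero out one patch
                for c in range(c0, c0 + PATCH):
                    occluded[r][c] = 0.0
            drop = base - model_score(occluded)  # score lost = importance
            for r in range(r0, r0 + PATCH):
                for c in range(c0, c0 + PATCH):
                    heat[r][c] = drop
    return heat

image = [[1.0] * SIZE for _ in range(SIZE)]
heat = occlusion_map(image)
# Center patches (which the model depends on) get the largest drops.
```

The trade-off the article mentions is visible here: one forward pass per patch position, which is why perturbation methods are slower than a single backpropagation pass but probe the model's actual behavior rather than its gradients.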

"Our partners at LG desired a new technology that combined the advantages of both," says Sudhakar. "They had an existing [machine learning] model that identified defective parts in LG products with displays, and our task was to improve the accuracy of the high-resolution heat maps of possible defects while maintaining an acceptable run time."

The team's resulting XAI algorithm, Semantic Input Sampling for Explanation (SISE), is described in a recent paper for the 35th AAAI Conference on Artificial Intelligence.

https://arxiv.org/abs/2102.07799





Resource?

https://www.princeton.edu/news/2021/04/01/hello-world-princeton-and-whyy-launch-new-podcast-ai-nation

Hello, World. Princeton and WHYY launch new podcast “A.I. Nation”

Decisions once made by people are increasingly being made by machines, often without transparency or accountability. In "A.I. Nation," a new podcast premiering on April 1, Princeton University and Philadelphia public radio station WHYY have partnered to explore the omnipresence of artificial intelligence (A.I.) and its implications for our everyday lives.

"A.I. Nation" is co-hosted by Ed Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs and founding director of Princeton's Center for Information Technology Policy, and WHYY reporter Malcolm Burnley. Over the course of five episodes, the pair will investigate how artificial intelligence is affecting our lives right now, and the impact that technologies like machine learning, automation and predictive analytics will have on our future.

In episode one, Felten and Burnley experiment with GPT-3, an NLP technology developed by OpenAI, a research lab co-founded by Elon Musk and funded by Microsoft. While GPT-3's capabilities are incredible — it can write everything from novels to news stories — it can also be inconsistent. What is more alarming, however, is that the technology is capable of spreading misinformation. This, as Felten and Burnley discuss, is one of the reasons why OpenAI believed the previous version of the model, GPT-2, was too dangerous to release to the general public.

In episode two, “A.I. in the Driver’s Seat,” Burnley and Felten consider the safety, security and ethical implications of automated machines. Burnley tours a Princeton drone lab with Anirudha Majumdar, assistant professor of mechanical and aerospace engineering, to witness the A.I. behind drones in action. Felten and Burnley also discuss some of the reasons why self-driving vehicles, a technology that has been in development for decades, are still not available to the public and how they might be used in the near future.

New episodes — on “The Next Pandemic” (April 15), “Biased Intelligence” (April 22) and “Echo Chambers” (April 29) — will be released throughout the month.





Because Colorado.

https://www.makeuseof.com/iphone-apps-every-skier-and-snowboarder-needs/

8 iPhone Apps Every Skier and Snowboarder Needs


