Saturday, October 16, 2021

I don’t understand the thinking behind a response like this. Do they believe that suppressing the story is better than an honest explanation? I see it as, “You don’t know what happened so you want to prevent anyone else from figuring it out.” I think it is important to report these bad decisions in hopes that others think twice.

https://www.databreaches.net/shoot-the-messenger-friday-edition-homewood-health-resorts-to-threats-and-a-court-order/

“Shoot the Messenger,” Friday edition: Homewood Health resorts to threats and a court order?

In July of this year, CTV News in Canada and DataBreaches.net reported on a breach involving Homewood Health in Canada. Both CTV and this site had become aware of the breach when data allegedly from Homewood showed up on a leak site called Marketo. Marketo claimed to have almost 300 GB of Homewood’s data for sale.

As is Marketo’s business model, they apparently first tried to get Homewood to pay them to remove the data from public access. When that failed, they started dumping small amounts of data as proof of claim and to increase pressure on Homewood to pay them.

And as is this site’s usual routine, DataBreaches.net reached out to the victim — in this case, Homewood Health — with questions about the incident. As more information and data became available to this site from Marketo’s site, those questions were expanded.

Homewood Health ignored all of this site’s inquiries. That is their right, of course, but by ignoring those inquiries, they deprived themselves of the opportunity to tell their side of the story or to provide a statement that would have made the use of any screencaps unnecessary. Instead, they stonewalled and left this site in the position of using redacted screencaps to prove that this breach involved personal and sensitive information. It’s a shame that Homewood Health didn’t just acknowledge that forthrightly when asked repeatedly.

More than one month later, DataBreaches.net received a legal threat letter from Homewood’s external counsel — the Miller Thomson law firm.

The letter, which appears to be an attempt to intimidate this blogger and this site into destroying data and chilling speech, contains patently false and defamatory claims about this blogger. I will respond to just a few of their allegations:

Your unauthorized publication of the Confidential Information and related unlawful actions constitute several violations of law, including but not limited to:
(a) conspiracy;
(b) defamation;
(c) extortion;
(d) unlawful interference with economic relations; and
(e) intentional infliction of emotional distress.

On July 21, 22, 23, and August 8, this site sent inquiries to Homewood seeking information and clarification about the breach. The inquiries were polite and contained no threats of anything, so it is not clear how Miller Thomson can claim that this blogger or site has engaged in any “extortion.” Nor is it clear how I allegedly “conspired” with anyone when I am a solo blogger. Does Miller Thomson consider getting information from a source “conspiring?”

Their other allegations are also refuted by the facts. Perhaps the lawyers just threw a bunch of allegations at the wall and hoped that some would stick?

The letter then goes on to make demands that basically attempt to censor reporting and decimate press freedom. Regular readers of this site already know how this site responds to attempts to chill speech and a free press.

So Homewood Health had multiple opportunities to issue a statement or to speak to me about the incident if they wished to try to have input to the reporting. They stonewalled this site and then resorted to legal threats. And they apparently convinced a court in Calgary to issue a court order. With all due respect to the Calgary court, I will not be responding to the court.

Great thanks to some terrific lawyers at Covington and Burling and Osler, Hoskin & Harcourt. Neither firm nor any of their employees are responsible for the opinions expressed in this blog post, however.


(Related) Maybe. Almost as good as “inventing a new sin.” Probably less profitable.

https://www.pogowasright.org/alberta-court-recognizes-new-tort-protecting-private-information/

Alberta Court Recognizes New Tort Protecting Private Information

Jennie Buchanan of Lawson Lundell LLP writes:

In ES v Shillington, a decision issued last month, the Alberta Court of Queen’s Bench recognized the tort of Public Disclosure of Private Facts, a new cause of action that protects private information from public disclosure. Formal recognition of this tort in Alberta marks an important development in the law, giving additional legal protection to individuals’ information privacy rights at a time when the proliferation of technology makes it harder and harder to protect private information.
In order to establish liability for the tort of Public Disclosure of Private Facts, the plaintiff must prove that:
  1. the defendant publicized an aspect of the plaintiff’s private life;
  2. the plaintiff did not consent to the publication;
  3. the matter publicized or its publication would be highly offensive to a reasonable person in the position of the plaintiff; and
  4. the publication was not of legitimate concern to the public.

Read more on Mondaq.

This ruling may explain why a Canadian firm went running to an Alberta court to try to get an order concerning my reporting about a data breach the firm experienced, but there are significant differences between my reporting on Homewood Health’s breach and the situation in ES v Shillington.



What were they thinking?

https://www.pogowasright.org/minneapolis-schools-gaggle-software-on-kids-devices-reports-gay-lgbtq-users-as-it-blocks-porn-finds-at-risk-of-self-harm/

Minneapolis Schools’ ‘Gaggle’ Software On Kids’ Devices Reports ‘Gay’, ‘LGBTQ’ Users As It Blocks Porn, Finds At-Risk of Self Harm

Towler Road reports:

Minneapolis Public Schools are using software to monitor student communications in and out of school, raising serious concerns over student privacy. The non-profit The 74, which analyzed public records, has just issued a report on the district’s use of the Gaggle software, which can be used for 24-hour monitoring through school-provided tech devices and includes identifying and passing along student interest in keywords including “gay”, “LGBTQ” and others.

Read more on Towler Road.



I see this as much more difficult than trying to explain the initial algorithm. Perhaps there will be a market for someone (some AI?) who can examine the results of such decisions and explain the AI’s reasoning and how it changes over time?

https://www.cpomagazine.com/data-privacy/carnegie-mellon-university-end-users-deserve-right-to-explanation-about-how-algorithmic-decision-making-models-profile-them/

Carnegie Mellon University: End Users Deserve “Right to Explanation” About How Algorithmic Decision-Making Models Profile Them

Social media and the internet advertising industry now almost entirely run on algorithmic decision-making models that attempt to determine who the end user is, how their mind works and what they will be most receptive to (and engage with). Researchers at Carnegie Mellon University, fresh off of an analysis of these models published in Business Ethics Quarterly, are now advocating for a “right to explanation” to shed light on these secretive models that influence the mood, behavior and even actions of millions around the world each day.

The researchers examine this proposed right within the framework of existing General Data Protection Regulation (GDPR) rules, drawing a comparison to the established “right to be forgotten” (also a feature of certain other national data protection laws). Among other ideas, the paper imagines a new position of “data interpreter” to serve as a good faith liaison between the public and the output of these opaque algorithmic decision-making models.



Technology to watch?

https://techcrunch.com/2021/10/15/spot-ai-emerges-from-stealth-with-22m-with-a-platform-to-draw-out-more-intelligence-from-organizations-basic-security-videos/

Spot AI emerges from stealth with $22M for a platform to draw out more intelligence from organizations’ basic security videos

Security cameras, for better or for worse, are part and parcel of how many businesses monitor spaces in the workplace for security or operational reasons. Now, a startup is coming out of stealth with funding for tech designed to make the video produced by those cameras more useful. Spot AI has built a software platform that “reads” that video footage — regardless of the type or quality of camera it was created on — and makes video produced by those cameras searchable by anyone who needs it, both by way of words and by way of images in the frames shot by the cameras.

… Spot AI is entering the above market with all good intentions, CEO and co-founder Tanuj Thapliyal said in an interview. The startup’s theory is that security cameras are already important and the point is to figure out how to use them better, for more productive purposes that can cover not just security, but health and safety and operations working as they should.

“If you make the video data [produced by these cameras] more useful and accessible to more people in the workplace, then you transform it from this idea of surveillance to the idea of video intelligence,” said Thapliyal, who co-founded the company with Rish Gupta and Sud Bhatija.

