Saturday, October 16, 2021

I don’t understand the thinking behind a response like this. Do they believe that suppressing the story is better than an honest explanation? I see it as, “You don’t know what happened so you want to prevent anyone else from figuring it out.” I think it is important to report these bad decisions in hopes that others think twice.

https://www.databreaches.net/shoot-the-messenger-friday-edition-homewood-health-resorts-to-threats-and-a-court-order/

“Shoot the Messenger,” Friday edition: Homewood Health resorts to threats and a court order?

In July of this year, CTV News in Canada and DataBreaches.net reported on a breach involving Homewood Health in Canada. Both CTV and this site had become aware of the breach when data allegedly from Homewood showed up on a leak site called Marketo. Marketo claimed to have almost 300 GB of Homewood’s data for sale.

As is Marketo’s business model, they apparently first tried to get Homewood to pay them to remove the data from public access. When that failed, they started dumping small amounts of data as proof of claim and to increase pressure on Homewood to pay them.

And as is this site’s usual routine, DataBreaches.net reached out to the victim — in this case, Homewood Health — with questions about the incident. As more information and data became available to this site from Marketo’s site, those questions were expanded.

Homewood Health ignored all of this site’s inquiries. That is their right, of course, but by doing so they deprived themselves of the opportunity to tell their side of the story or to provide a statement that would have made the use of any screencaps unnecessary. Instead, they stonewalled and left this site in the position of using redacted screencaps to prove that this breach involved personal and sensitive information. It is a shame that Homewood Health did not simply acknowledge that forthrightly when asked repeatedly.

More than one month later, DataBreaches.net received a legal threat letter from Homewood’s external counsel — the Miller Thomson law firm.

The letter, which appears to be an attempt to intimidate this blogger and this site into destroying data and chilling speech, contains patently false and defamatory claims about this blogger. I will respond to just a few of their allegations:

Your unauthorized publication of the Confidential Information and related unlawful actions constitute several violations of law, including but not limited to:
(a) conspiracy;
(b) defamation;
(c) extortion;
(d) unlawful interference with economic relations; and
(e) intentional infliction of emotional distress.

On July 21, 22, 23, and August 8, this site sent inquiries to Homewood seeking information and clarification about the breach. The inquiries were polite and contained no threats of anything, so it is not clear how Miller Thomson can claim that this blogger or site has engaged in any “extortion.” Nor is it clear how I allegedly “conspired” with anyone when I am a solo blogger. Does Miller Thomson consider getting information from a source “conspiring”?

Their other allegations are also refuted by the facts. Perhaps the lawyers just threw a bunch of allegations at the wall and hoped that some would stick?

The letter then goes on to make demands that basically attempt to censor reporting and decimate press freedom. Regular readers of this site already know how this site responds to attempts to chill speech and a free press.

So Homewood Health had multiple opportunities to issue a statement or to speak to me about the incident if they wished to try to have input to the reporting. They stonewalled this site and then resorted to legal threats. And they apparently convinced a court in Calgary to issue a court order. With all due respect to the Calgary court, I will not be responding to the court.

Great thanks to some terrific lawyers at Covington and Burling and at Osler, Hoskin & Harcourt. Neither firm nor any of their employees is responsible for the opinions expressed in this blog post, however.


(Related) Maybe. Almost as good as “inventing a new sin.” Probably less profitable.

https://www.pogowasright.org/alberta-court-recognizes-new-tort-protecting-private-information/

Alberta Court Recognizes New Tort Protecting Private Information

Jennie Buchanan of Lawson Lundell LLP writes:

In ES v Shillington, a decision issued last month, the Alberta Court of Queen’s Bench recognized the tort of Public Disclosure of Private Facts, a new cause of action that protects private information from public disclosure. Formal recognition of this tort in Alberta marks an important development in the law, giving additional legal protection to individuals’ information privacy rights at a time when the proliferation of technology makes it harder and harder to protect private information.
In order to establish liability for the tort of Public Disclosure of Private Facts, the plaintiff must prove that:
  1. the defendant publicized an aspect of the plaintiff’s private life;
  2. the plaintiff did not consent to the publication;
  3. the matter publicized or its publication would be highly offensive to a reasonable person in the position of the plaintiff; and
  4. the publication was not of legitimate concern to the public.

Read more on Mondaq.

This ruling may explain why a Canadian firm went running to an Alberta court to try to get an order concerning my reporting about a data breach the firm experienced, but there are significant differences between my reporting on Homewood Health’s breach and the situation in ES v Shillington.



What were they thinking?

https://www.pogowasright.org/minneapolis-schools-gaggle-software-on-kids-devices-reports-gay-lgbtq-users-as-it-blocks-porn-finds-at-risk-of-self-harm/

Minneapolis Schools’ ‘Gaggle’ Software On Kids’ Devices Reports ‘Gay’, ‘LGBTQ’ Users As It Blocks Porn, Finds At-Risk of Self Harm

Towler Road reports:

Minneapolis Public Schools are using software to monitor student communications in and out of school, raising serious concerns over student privacy, according to a report just issued by the non-profit The 74, which analyzed public records. The report details the district’s use of the Gaggle software, which can be used for 24-hour monitoring through school-provided tech devices and includes identifying and passing along student interest in keywords including “gay”, “LGBTQ” and others.

Read more on Towler Road.



I see this as much more difficult than trying to explain the initial algorithm. Perhaps there will be a market for someone (some AI?) who can examine the results of such decisions and explain the AI’s reasoning and how it changes over time?

https://www.cpomagazine.com/data-privacy/carnegie-mellon-university-end-users-deserve-right-to-explanation-about-how-algorithmic-decision-making-models-profile-them/

Carnegie Mellon University: End Users Deserve “Right to Explanation” About How Algorithmic Decision-Making Models Profile Them

Social media and the internet advertising industry now almost entirely run on algorithmic decision-making models that attempt to determine who the end user is, how their mind works and what they will be most receptive to (and engage with). Researchers at Carnegie Mellon University, fresh off of an analysis of these models published in Business Ethics Quarterly, are now advocating for a “right to explanation” to shed light on these secretive models that influence the mood, behavior and even actions of millions around the world each day.

The researchers examine this proposed right within the framework of existing General Data Protection Regulation (GDPR) rules, drawing a comparison to the established “right to be forgotten” (also a feature of certain other national data protection laws). Among other ideas, the paper imagines a new position of “data interpreter” to serve as a good faith liaison between the public and the output of these opaque algorithmic decision-making models.



Technology to watch?

https://techcrunch.com/2021/10/15/spot-ai-emerges-from-stealth-with-22m-with-a-platform-to-draw-out-more-intelligence-from-organizations-basic-security-videos/

Spot AI emerges from stealth with $22M for a platform to draw out more intelligence from organizations’ basic security videos

Security cameras, for better or for worse, are part and parcel of how many businesses monitor spaces in the workplace for security or operational reasons. Now, a startup is coming out of stealth with funding for tech designed to make the video produced by those cameras more useful. Spot AI has built a software platform that “reads” that video footage — regardless of the type or quality of camera it was created on — and makes video produced by those cameras searchable by anyone who needs it, both by way of words and by way of images in the frames shot by the cameras.

… Spot AI is entering the above market with all good intentions, CEO and co-founder Tanuj Thapliyal said in an interview. The startup’s theory is that security cameras are already important and the point is to figure out how to use them better, for more productive purposes that can cover not just security, but health and safety and operations working as they should.

“If you make the video data [produced by these cameras] more useful and accessible to more people in the workplace, then you transform it from this idea of surveillance to the idea of video intelligence,” said Thapliyal, who co-founded the company with Rish Gupta and Sud Bhatija.


Friday, October 15, 2021

This can’t be right, can it?

https://threatpost.com/podcast-67-percent-orgs-ransomware/175339/

Podcast: 67% of Orgs Have Been Hit by Ransomware at Least Once

A recent report found that two-thirds, or 67 percent, of surveyed organizations have suffered a ransomware attack, about half have been hit multiple times, and 16 percent have been hit three or more times.

According to Fortinet’s Global State of Ransomware Report 2021 (PDF), released last week, most organizations report that ransomware is their most concerning cyber threat. That’s particularly true for respondents in Latin America, Asia-Pacific and Europe-Middle East-Africa, who report that they’re more likely to be victims than their peers in the U.S. or Canada.



Overkill? No need to consult a judge or anyone in the target country?

https://www.databreaches.net/australia-to-tackle-ransomware-data-breaches-by-deleting-stolen-files/

Australia to tackle ransomware data breaches by deleting stolen files

Bill Toulas reports:

Australia’s Minister for Home Affairs has announced the “Australian Government’s Ransomware Action Plan,” which is a set of new measures the country will adopt in an attempt to tackle the rising threat.
[…]
To further strengthen the ability to conduct investigations and disrupt ransomware attacks, the government is looking to establish new powers through the Surveillance Legislation Amendment Act 2021.
Under this new legislation, the Australian Federal Police (AFP) and Australian Criminal Intelligence Commission (ACIC) will have the power to delete or remove data linked to suspected criminal activity, permitting access to devices and networks and even allowing the takeover of online accounts for investigation purposes.

Read more on BleepingComputer.

So if this is in collaboration with other countries, is Australia claiming the right to take down data on servers in the U.S. or to seize devices of American journalists who may have data dumps or stolen data that they analyze for reporting purposes? Where do these new powers end?



The latest ‘shoot the messenger’ reaction.

https://krebsonsecurity.com/2021/10/missouri-governor-vows-to-prosecute-st-louis-post-dispatch-for-reporting-security-vulnerability/

Missouri Governor Vows to Prosecute St. Louis Post-Dispatch for Reporting Security Vulnerability

On Wednesday, the St. Louis Post-Dispatch ran a story about how its staff discovered and reported a security vulnerability in a Missouri state education website that exposed the Social Security numbers of 100,000 elementary and secondary teachers. In a press conference this morning, Missouri Gov. Mike Parson (R) said fixing the flaw could cost the state $50 million, and vowed his administration would seek to prosecute and investigate the “hackers” and anyone who aided the publication in its “attempt to embarrass the state and sell headlines for their news outlet.”

The Post-Dispatch says it discovered the vulnerability in a web application that allowed the public to search teacher certifications and credentials, and that more than 100,000 SSNs were available. The Missouri state Department of Elementary and Secondary Education (DESE) reportedly removed the affected pages from its website Tuesday after being notified of the problem by the publication (before the story on the flaw was published).

The newspaper said it found that teachers’ Social Security numbers were contained in the HTML source code of the pages involved. In other words, the information was available to anyone with a web browser who happened to also examine the site’s public code using Developer Tools or simply right-clicking on the page and viewing the source code.
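The point about viewing the source is worth making concrete: anything embedded in a page’s HTML is delivered in full to every visitor’s browser, so no special tools or “hacking” are needed to read it. A minimal sketch, in which the HTML fragment and the SSN (a well-known specimen number, not real data) are invented for illustration:

```python
import re

# Hypothetical page source, exactly as a browser would receive it over HTTP.
# Note the "hidden" field: hidden only from the rendered page, not from the source.
html = """
<div class="teacher-record" data-cert="12345">
  <span class="name">Jane Doe</span>
  <input type="hidden" name="ssn" value="078-05-1120">
</div>
"""

# Any visitor can scan the delivered source for SSN-shaped strings.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
exposed = SSN_PATTERN.findall(html)
print(exposed)  # ['078-05-1120']
```

The point of the sketch is that “View Source” and this regex see exactly the same bytes; if sensitive data is in the markup, it has already been published.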



Finally, a court that realizes that technology is not perfect!

https://www.bespacific.com/court-says-google-translate-isnt-reliable-enough-to-determine-consent-for-a-search/

Court Says Google Translate Isn’t Reliable Enough To Determine Consent For A Search

TechDirt: “The quickest way to a warrantless search is obtaining consent. But consent obtained by officers isn’t always consent, no matter how it’s portrayed in police reports and court testimony. Courts have sometimes pointed this out, stripping away ill-gotten search gains when consent turned out to be [extremely air quotation marks] “consent.” Such is the case in this court decision, brought to our attention by FourthAmendment.com. Language barriers are a thing, and it falls on officers of the law to ensure that those they’re speaking with understand clearly what they’re saying, especially when it comes to actions directly involving their rights. It all starts with a stop. A pretextual one at that, as you can see by the narrative recounted by the court…”



Shouldn’t we be able to ask Google for anything related to a crime?

https://www.bespacific.com/government-secretly-orders-google-to-identify/

Government Secretly Orders Google To Identify Anyone Who Searched A Sexual Assault Victim’s Name, Address And Telephone Number

Forbes: “The U.S. government is secretly ordering Google to provide data on anyone typing in certain search terms, an accidentally unsealed court document shows. There are fears such “keyword warrants” threaten to implicate innocent Web users in serious crimes and are more common than previously thought… It’s a rare example of a so-called keyword warrant and, with the number of search terms included, the broadest on record. Before this latest case, only two keyword warrants had been made public. One revealed in 2020 asked for anyone who had searched for the address of an arson victim who was a witness in the government’s racketeering case against singer R Kelly. Another, detailed in 2017, revealed that a Minnesota judge signed off on a warrant asking Google to provide information on anyone who searched a fraud victim’s name from within the city of Edina, where the crime took place. While Google deals with thousands of such orders every year, the keyword warrant is one of the more contentious. In many cases, the government will already have a specific Google account that they want information on and have proof it’s linked to a crime. But search term orders are effectively fishing expeditions, hoping to ensnare possible suspects whose identities the government does not know. It’s not dissimilar to so-called geofence warrants, where investigators ask Google to provide information on anyone within the location of a crime scene at a given time…”



The next outrage?

https://www.slashgear.com/facebook-ego4d-tracking-your-when-how-what-and-who-14695225/

Facebook Ego4D tracking your When, How, What, and Who

The aim of the researchers working with Facebook AI in this research is to develop artificial intelligence that “understands the world from this point of view” so that they’re able to “unlock a new era of immersive experiences.” They’re looking specifically at how augmented reality (AR) glasses and virtual reality (VR) headsets will “become as useful in everyday life as smartphones.”

Researchers listed five “benchmark challenges” for this project that effectively show what they’re tracking. To be clear: Facebook isn’t tracking this data through real live devices for this project – not yet. This is all being tracked via first-person perspective videos Facebook AI attained for this project:

Episodic memory: What happened when?

Forecasting: What am I likely to do next?

Hand and object manipulation: What am I doing?

Audio-visual diarization: Who said what when?

Social interaction: Who is interacting with whom?

To learn more about this project, take a peek at the research paper Ego4D: Around the World in 3,000 Hours of Egocentric Video as published by arXiv.



Scary or reassuring?

https://www.theguardian.com/technology/2021/oct/15/ai-and-maths-to-play-bigger-role-in-global-diplomacy-says-expert

AI and maths to play bigger role in global diplomacy, says expert

… Michael Ambühl, a professor of negotiation and conflict management and former chief Swiss-EU negotiator, said recent advances in AI and machine learning mean that these technologies now have a meaningful part to play in international diplomacy, including at the Cop26 summit starting later this month and in post-Brexit deals on trade and immigration.

… The use of AI in international negotiations is at an early stage, he said, citing the use of machine learning to assess the integrity of data and detect fake news to ensure the diplomatic process has reliable foundations. In the future, these technologies could be used to identify patterns in economic data underpinning free trade deals and help standardise some aspects of negotiations.

The Lab for Science in Diplomacy, a collaboration between ETH Zürich where Ambühl is based and the University of Geneva, will also focus on “negotiation engineering”, where existing mathematical techniques such as game theory are used either to help frame a discussion, or to play out different scenarios before engaging in talks.


Thursday, October 14, 2021

Something for the litigious.

https://www.pogowasright.org/uk-judge-rules-that-a-neighbors-ring-doorbell-camera-had-breached-privacy-a-doctor-was-awarded-a-100000-payout-uk-judge-rules-that-after-a-judge-ruled-that-a-neighbors-rin/

UK judge rules that a neighbor’s Ring doorbell camera had ‘breached privacy;’ a doctor was awarded a £100,000 payout.

This could be precedent-setting. Alanis Hayal reports:

Dr Mary Fairhurst claimed her neighbour’s cameras left her feeling as though she was under “continuous visual surveillance” as a judge ruled that the footage captured breached Data Protection laws.
A doctor is set to receive £100,000 in compensation after a judge ruled that her neighbours’ Ring doorbell camera breached her privacy.
Dr Mary Fairhurst claimed she was forced to leave her home in Thame in Oxfordshire, as the security cameras set up on her neighbour Jon Woodward’s property were too ‘intrusive.’

Read more on Brinkwire.



We can, therefore we must. But did anyone take the time to think this through?

https://www.pogowasright.org/7-eleven-biometric-data-collection-found-in-breach-of-australian-privacy-laws/

7-Eleven biometric data collection found in breach of Australian privacy laws

Zach Marzouk reports:

US convenience store chain 7-Eleven has been accused of breaching Australian privacy laws by collecting customers’ biometric data without their consent.
The Office of the Australian Information Commissioner (OAIC) found that between 15 June 2020 and 24 August 2021, the Australian arm of 7-Eleven interfered with the privacy of individuals by gathering facial recognition data through a hidden mechanism in its customer feedback form.

Read more on ITPro.

[From the article:

The store said it was capturing this data to detect if the same person was leaving multiple responses to the survey within a 20-hour period on the same tablet. If they were, it wanted to exclude their responses from the survey results in case they weren’t genuine.



Serious recognition, but the name may lack some of the stature of a Nobel…

https://pratt.duke.edu/about/news/rudin-squirrel-award

Duke Professor Wins $1 Million Artificial Intelligence Prize, A ‘New Nobel’

While many scholars in the developing field of machine learning were focused on improving algorithms, Rudin instead wanted to use AI’s power to help society. She chose to pursue opportunities to apply machine learning techniques to important societal problems, and in the process, realized that AI’s potential is best unlocked when humans can peer inside and understand what it is doing.

Now, after 15 years of advocating for and developing “interpretable” machine learning algorithms that allow humans to see inside AI, Rudin’s contributions to the field have earned her the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity from the Association for the Advancement of Artificial Intelligence (AAAI). Founded in 1979, AAAI serves as the prominent international scientific society serving AI researchers, practitioners and educators.



Or, AI could take that survey and report whatever conclusion it chooses… What do Russians know about democracy anyway?

https://www.wionews.com/world/scary-ex-kremlin-mastermind-predicts-formation-of-humanless-democracies-420499

Scary! Ex-Kremlin mastermind predicts formation of 'humanless democracies'

… Now, an ex-Kremlin mastermind Vladislav Surkov has predicted that next 100 years will see advent of 'humanless democracies'. Machines will control everything and humans will increasingly get relegated to sidelines.

… In his piece written for the publication ‘Actual Comment’, Surkov says that in the next 100 years, the internet will take the form of direct democracy. There will no longer be a need to elect a government. Instead, anyone who wants to can vote on proposed legislation.

“For example, if you need another law on beekeeping, everyone who cares – beekeepers, honey enthusiasts, beauticians, pharmacists, people who have been stung by bees, people with allergies, lawyers, hive and smokehouse manufacturers, beeophiles and bee-haters, as well as those who always care about everything – can all directly participate in its drafting, introduction, discussion and adoption,” says Surkov as quoted by Russia Today.

“There is no parliament in this scheme. In its place are communication tools, algorithms and moderators.”

Surkov paints a scary picture saying that people will be slowly removed from such a process and that machines will eliminate the human factor. He says that humans will have more and more comfort but will have less and less say in how things are run.



Tools & Techniques. There is a free version...

https://futurism.com/you-can-use-artificial-intelligence-to-take-your-presentations-to-the-next-level

You Can Use Artificial Intelligence to Take Your Presentations to the Next Level

… if you want a solution that truly takes your presentations to the next level, you need to check out Beautiful.ai.

When it comes to visual communication, there are really five basic qualities you want your slide presentation to have. It should feature minimal text that supports rather than repeats what you are saying. It should feature slides with a variety of layouts to avoid monotony and maintain visual interest. It should feature beautiful visuals such as graphs, charts, statistics, and images that support your key takeaways. It should be meticulously consistent and coherent in style and formatting. And it should stay on brand by incorporating your company or organization’s color palette and logos.


Wednesday, October 13, 2021

October is ‘Let’s commit a cybercrime’ month.

https://www.csoonline.com/article/3636161/october-is-high-season-for-cyberattacks-infosec-institute-study-shows.html#tk.rss_all

October is high season for cyberattacks, Infosec Institute study shows

There has been an exponential increase in cyberattacks around the globe in the last five years and a major chunk of it happened in October each year, according to a study by Infosec Institute.

A similar offensive appears to be building up this month, judging from the study's projections for an "October surprise" as well as observations of cyberattacks that have occurred so far.

The study underscores that the attacks that have occurred in the month of October in the past five years have been traced back mainly to five offending entities — Russia, China, North Korea, Iran, and a catchall grouping termed anonymous, which refers to unclaimed attacks by unknown assailants that could not be linked to any offending party or nation.



Defense against one type of attack…

https://www.bespacific.com/how-to-combat-the-most-prevalent-ransomware-threats/

How to combat the most prevalent ransomware threats

Tech Republic: “Ransomware has turned into one of the most devastating cyberthreats as criminal gangs launch destructive attacks against specific industries and organizations. Attackers also have upped their game through multiple strategies, such as the double-extortion tactic in which they vow to publicly release the stolen data unless the ransom is paid. In its latest Advanced Threat Research Report, McAfee looks at the most prominent ransomware strains for the second quarter of 2021 and offers advice on how to combat them…”



All resources welcome.

https://www.pogowasright.org/new-resources-on-privacy-harms/

New resources on privacy harms

Professors Daniel Solove and Danielle Citron have revised their important article, Privacy Harms, forthcoming 102 B.U. Law Review __ (2022). You can download the latest draft for free on SSRN.

“Among other things,” Dan writes, “we rethought the typology to add top-level categories and subcategories.”

Other papers on harms that the two have co-authored:



Perhaps it should protect properly programmed (not self-taught) replicants?

https://www.bespacific.com/the-first-amendment-does-not-protect-replicants/

The First Amendment Does Not Protect Replicants

Lessig, Lawrence, The First Amendment Does Not Protect Replicants (September 10, 2021). Social Media and Democracy (Lee Bollinger & Geoffrey Stone, eds., Oxford 2022), Forthcoming, Available at SSRN: https://ssrn.com/abstract=3922565 or http://dx.doi.org/10.2139/ssrn.3922565

“As the semantic capability of computer systems increases, the law should resolve clearly whether the First Amendment protects machine speech. This essay argues it should not be read to reach sufficiently sophisticated — ‘replicant’ — speech.”



A look at AI.

https://sifted.eu/articles/artificial-intelligence-startups-safety/

Artificial intelligence is becoming a ‘force multiplier’ — for good and bad

Some of the military applications of AI that are being developed are “pretty terrifying,” says Ian Hogarth in the State of AI report.

Artificial intelligence is rapidly moving out of the lab and becoming a technological “force multiplier” in an ever-widening range of real-world cases, including drug development, healthcare, energy, logistics and defence, according to the latest State of AI report.

In their fourth annual report, a 188-slide monster pack that provides one of the most useful snapshots of the sector, Nathan Benaich and Ian Hogarth flag the most interesting developments in AI over the past year in terms of research, people, industry and politics.



Deep geek!

https://www.technologyreview.com/2021/10/13/1037027/podcast-the-story-of-artificial-intelligence/

Podcast: The story of AI, as told by the people who invented it

Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable face recognition system.



Perspective. Podcast

https://knowledge.wharton.upenn.edu/article/why-the-u-s-housing-boom-isnt-a-bubble/

Why the U.S. Housing Boom Isn’t a Bubble

While the red-hot real estate market is finally showing signs of cooling, its meteoric rise has many Americans wondering if housing prices are a bubble that is about to burst, much like the collapse that triggered the Great Recession.

Wharton real estate and finance professor Benjamin Keys says that’s not the case.

“I come down very strongly against that view. I don’t think that it’s likely that we’re going to see a bubble burst in the way that we saw in 2008, 2009, and 2010,” he said during an interview with Wharton Business Daily on SiriusXM. (Listen to the podcast above.)



For my next math class...

https://www.makeuseof.com/tools-solve-math-problems/

6 Tools to Help You Solve Difficult Math Problems

Math is fairly tricky, so what better way to get help than with the tech in your pocket? Here are six tools to help you solve difficult math problems.



A security resource.

https://www.freetech4teachers.com/2021/10/cybersecurity-awareness-month-safety.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+freetech4teachers/cGEY+(Free+Technology+for+Teachers)

Cybersecurity Awareness Month - Safety Tips Sheets, Posters, and Lesson Plans


Tuesday, October 12, 2021

Intermediation still has a place in the age of disintermediation?

https://www.networkworld.com/article/3636499/edge-computing-the-architecture-of-the-future.html#tk.rss_all

Edge computing: The architecture of the future

As technology extends deeper into every aspect of business, the tip of the spear is often some device at the outer edge of the network, whether a connected industrial controller, a soil moisture sensor, a smartphone, or a security cam.

This ballooning internet of things is already collecting petabytes of data, some of it processed for analysis and some of it immediately actionable. So an architectural problem arises: You don’t want to connect all those devices and stream all that data directly to some centralized cloud or company data center. The latency and data transfer costs are too high.

That’s where edge computing comes in. It provides the “intermediating infrastructure and critical services between core datacenters and intelligent endpoints,” as the research firm IDC puts it. In other words, edge computing provides a vital layer of compute and storage physically close to IoT endpoints, so that control devices can respond with low latency – and edge analytics processing can reduce the amount of data that needs to be transferred to the core.
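The data-reduction role IDC describes can be sketched in a few lines: rather than streaming every raw reading to the core, an edge node aggregates locally and forwards only a compact summary plus the immediately actionable out-of-range readings. This is an illustrative sketch, with the threshold and sensor values invented for the example:

```python
# Hypothetical edge-node logic: reduce a batch of raw sensor readings
# to the small payload the core data center actually needs.
THRESHOLD = 80.0  # alert threshold, invented for illustration

def process_at_edge(readings):
    """Aggregate locally; return a summary plus any out-of-range alerts."""
    alerts = [r for r in readings if r > THRESHOLD]  # immediately actionable
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    return summary, alerts  # far smaller than the raw stream

# Six raw readings collapse into one summary dict and one alert.
summary, alerts = process_at_edge([71.2, 69.8, 70.5, 85.3, 70.1, 69.9])
print(summary["count"], alerts)
```

Latency-sensitive control decisions (the alert) happen at the edge; only the aggregate crosses the network to the core, which is exactly the transfer-cost argument made above.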

In “Proving the value of analytics on the edge,” CIO contributor Bob Violino offers three case studies that illustrate the benefits of edge architecture.



Do I have the right to test my DNA?

https://www.insideprivacy.com/health-privacy/california-governor-signs-legislation-to-expand-genetic-privacy-protections-after-last-years-veto/

California Governor Signs Legislation to Expand Genetic Privacy Protections After Last Year’s Veto

On Wednesday, October 6th, Governor Gavin Newsom signed SB 41, the Genetic Information Privacy Act, which expands genetic privacy protections for consumers in California, including those interacting with direct-to-consumer (“DTC”) genetic testing companies. In a recent Inside Privacy blog post, our colleagues discussed SB 41 and the growing patchwork of state genetic privacy laws across the United States. Read the post here.


(Related)

https://www.pogowasright.org/newly-effective-florida-law-imposing-criminal-sanctions-adds-to-developing-nationwide-patchwork-of-state-genetic-privacy-laws/

Newly Effective Florida Law Imposing Criminal Sanctions Adds to Developing Nationwide Patchwork of State Genetic Privacy Laws

Libbie Canter and Rebecca Yergin of Covington & Burling write:

[On] October 1, the Protecting DNA Privacy Act (HB 833), a new genetic privacy law, went into effect in the state of Florida, establishing four new crimes related to the unlawful use of another person’s DNA. While the criminal penalties in HB 833 are notable, Florida is not alone in its focus on increased genetic privacy protections. A growing number of states, including Utah, Arizona, and California, have begun developing a net of genetic privacy protections to fill gaps in federal and other state legislation, often focused on the privacy practices of direct-to-consumer (“DTC”) genetic testing companies.

Read more on InsidePrivacy.



There is a lot I don’t normally include in this blog…

https://www.pogowasright.org/privacy-news-from-here-and-there-2/

Privacy news from here and there….

In case you missed it:

Ad trackers continue to collect Europeans’ data without consent:
https://digiday.com/media/ad-trackers-continue-to-collect-europeans-data-without-consent-under-the-gdpr-say-ad-data-detectives/

Wyoming school put on lockdown after one student refuses to wear face mask, and is arrested:
https://trib.com/news/state-and-regional/student-arrested-after-standoff-over-laramie-high-school-mask-requirement/article_1540b4b6-bad7-5d19-98d3-5162afce680a.html

Idaho Supreme Court: A police dog’s nose inside a car window before alerting is a search and a Jones trespass:
https://isc.idaho.gov/opinions/47367.pdf

Amazon Is Building a Smart Fridge That Knows What You Eat:
“Not content with just knowing your face, your voice, your fingerprint, and the blueprint of your home, Amazon also wants to know what you’re eating.”
https://www.businessinsider.com/amazon-is-building-smart-fridge-that-monitors-your-buying-patterns-2021-10

CA- Social Media Surveillance By The Los Angeles Police Department:
https://securityboulevard.com/2021/10/social-media-surveillance-by-law-enforcement-avast/

Find these and many more on Joe Cadillic’s MassPrivateI.



Always an amusing question.

https://www.makeuseof.com/recording-online-classes/

Can Teachers Record Your Online Classes?

Many schools made the switch from the traditional classroom to online platforms, and people still have a lot of questions about the new setup. Now that classes have physically moved from the school building to students’ homes, online classes can feel like an invasion of personal privacy.

Monitoring students sitting at their desks is one thing, but once webcams enter the picture, the opportunity for sneaky practices, like recording, skyrockets. The thought of a teacher recording you sitting at your laptop is disturbing, but is it illegal? Let's take a look and find out.



For ‘splainin to people.

https://www.makeuseof.com/what-is-surveillance-capitalism/

What Is Surveillance Capitalism?

Coined by Harvard Professor Shoshana Zuboff, surveillance capitalism is an economic system centered around the commodification of personal data with a core purpose of making profit.

In theory, surveillance capitalism helps businesses create better products, hold efficient inventory, and serve customers exactly what they need as soon as possible. By accurately pinpointing or swaying supply and demand, surveillance capitalism opens up a world of endless convenience.

However, the promised efficiency of surveillance capitalism doesn't necessarily mean it is ethical.



Lawyers using tech? Absolutely. Lawyers designing tech? Far less common, but highly desirable. (Start with: “I hate spending time doing …”)

https://www.nasdaq.com/articles/legal-technology%3A-why-the-legal-tech-boom-is-just-getting-started-2021-10-11

Legal Technology: Why the Legal Tech Boom is Just Getting Started

In quick succession, legal technology finally saw its first IPOs:

With private money pouring into legal tech startups and based on our own conversations inside the industry, we at LexFusion expect more IPOs on the horizon. Thus, a primer on legal tech as a category to watch. This Part I summarizes the legal market fundamentals driving unprecedented investment in enabling tech—much of which extends beyond the boundaries implied by “legal” as a descriptor.

Size the prize – it’s bigger than you think. We estimate a current market size of $14 billion across 3 related categories that used to require heavy touches from lawyers: legal tech, compliance (RegTech) & contracting (Ktech).



Don’t blame the poor AI when you show it data that documents a biased process and then tell it “this is how we do things.”

https://www.ft.com/content/7e42c58e-b3d4-4db5-9ddf-7e6c4b853366

A global AI bill of rights is desperately needed

Algorithmic decision-making has long put technology first, with due diligence an afterthought

It is becoming increasingly hard to spot evidence of human judgment in the wild. Automated decision-making can now influence recruitment, mortgage approvals and prison sentencing.

The rise of the machine, however, has been accompanied by growing evidence of algorithmic bias. Algorithms, trained on real-world data sets, can mirror the bias baked into the human deliberations they usurp. The effect has been to magnify rather than reduce discrimination, with women being sidelined for jobs as computer programmers and black patients being de-prioritised for kidney transplants.


(Related) Maybe not too surprising.

https://sloanreview.mit.edu/article/the-human-factor-in-ai-based-decision-making/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+mitsmr+%28MIT+Sloan+Management+Review%29

The Human Factor in AI-Based Decision-Making

Facing identical AI inputs, individuals make entirely different choices based on their own decision-making styles.

In AI-augmented decision processes, where algorithms provide suggestions and information, executives still have the final say. Salesforce, for example, uses its own AI program, called Einstein, to help leaders make important decisions in executive-level meetings. According to Salesforce’s visionary founder and CEO, Marc Benioff, the company’s decision-making processes have changed entirely since AI was introduced. Einstein’s guidance, whether on the performance of different regions or products or on new business opportunities, has helped to significantly reduce bias in meetings and decrease discussions driven by politics or personal agendas among members of the top management team.

Our research reveals that this human filter makes all the difference in organizations’ AI-based decisions. Data analysis shows that there is no single, universal human response to AI. Quite the opposite: One of our most surprising findings is that individuals make entirely different choices based on identical AI inputs.



Perspective. (I’ve mentioned this before…)

https://techxplore.com/news/2021-10-artificial-intelligence-everyday-power-double-edged.html

Artificial intelligence is now part of our everyday lives, and its growing power is a double-edged sword

A major new report on the state of artificial intelligence (AI) has just been released. Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view.

The report argues that we are at an inflection point where researchers and governments must think and act carefully to contain the risks AI presents and make the most of its benefits.




Worth a peek. Anything surprising?

https://www.cnbc.com/2021/10/11/the-10-fastest-growing-science-and-technology-jobs-of-the-next-decade.html

The 10 fastest-growing science and technology jobs of the next decade

According to new data from the Bureau of Labor Statistics, demand for jobs in math, science and technology will continue to surge over the next decade.

Hiring in the computer and information technology fields has faster projected growth between 2020 and 2030 than all other fields.



Perspective.

https://www.pcmag.com/news/your-next-training-session-might-be-taught-by-an-ai

Your Next Training Session Might be Taught by an AI

These days, education is more important to businesses than ever. Not only do companies need to keep employees properly trained and certified, but employers also have to be mindful of how their remote employees are educating their children: parents who are dissatisfied with how their kids are learning, or who have even resorted to homeschooling, will likely see those burdens show up in their productivity. One option that could make both of those scenarios easier is using artificial intelligence (AI) for teaching—and it's not as far-fetched as you might think.

A recent study by Tidio, an AI chatbot developer for apps such as help desks, shows that 53% of its US respondents said they'd be fine with an AI teaching their kids.