Saturday, April 03, 2021

Things my Computer Security students should be able to monitor in real time and respond to with pre-planned procedures.

https://www.makeuseof.com/what-does-indicators-of-compromise-mean/

What Does Indicators of Compromise Mean? The Best Tools to Help Monitor Them

Indicators of Compromise provide clues and evidence regarding data breaches. Learn the importance of monitoring them and four tools that can help.

IoCs play an integral role in cybersecurity analysis. Not only do they reveal and confirm that a security attack has occurred, but they also disclose the tools that were used to carry it out.

They are also helpful in determining the extent of the damage that a compromise has caused and assist in setting up benchmarks to prevent future compromises.

IoCs are generally gathered through standard security solutions like anti-malware and anti-virus software, but certain AI-based tools can also be used to collect these indicators during incident-response efforts.
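For students building a real-time monitor, the core of IoC matching is straightforward: compare observed artifacts (file hashes, IP addresses, domains) against a feed of known-bad indicators and alert on any match. A minimal sketch in Python, using file hashes; the indicator value below is a placeholder, not a real threat-intel entry:

```python
import hashlib
from pathlib import Path

# Hypothetical IoC feed: SHA-256 hashes of known-malicious files.
# In practice these would come from a threat-intelligence feed,
# refreshed on a schedule.
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    """Yield every path whose hash matches a known indicator."""
    for p in paths:
        if sha256_of(p) in KNOWN_BAD_HASHES:
            yield p
```

Each hit would then feed the pre-planned response procedure (isolate the host, preserve the file, open a ticket) rather than an ad-hoc reaction.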





How to simultaneously raise the stakes and dig yourself a deeper hole. Their fall will be spectacular. Whatever you do, don’t consider that the evidence might be overwhelmingly against your version of the story.

https://www.databreaches.net/anonymous-tries-to-get-this-sites-post-on-mobikwik-censored/

“Anonymous” tries to get this site’s post on MobiKwik censored

On March 30, DataBreaches.net posted an update to a controversial data breach that MobiKwik denies (previous coverage can be found here). The controversy subsequently escalated on Twitter when people started complaining that they had found their data in the leaked database and that it corresponded to what they had on file with MobiKwik. In addition to the shock and concern consumers felt about their data being available on the internet, there was anger at MobiKwik for denying responsibility and for trying to threaten and smear the researcher who had notified them and then pursued responsible disclosure.

In what made denial seem like an extreme sport, MobiKwik even went so far as to suggest that their customers may have uploaded all their information to multiple platforms.

The researcher, Rajshekhar Rajaharia, provides a more detailed timeline of the controversy on Medium. For his efforts to protect consumers, Rajaharia has been defamed as “media-crazed,” threatened with litigation, and censored by LinkedIn and Twitter based on complaints by MobiKwik.

And now DataBreaches.net has been targeted because on March 30, I posted Mobikwik offers master class in how NOT to respond to a breach; researchers scoff, consumers rage.

Today, I received an email from this site’s web host. They were forwarding a complaint submitted to CloudFlare from “anonymous,” and they asked me to look at it. So I did.

Before I show you what “anonymous” wrote, let me remind everyone what it says on this site’s About page:

This site is a combination of news aggregation, investigative reporting, and commentary. You may disagree with my reporting or be offended by my opinions. If you think I’ve erred in my reporting, email and let me know what you think I got wrong. If you don’t like my commentary on a situation or on your handling of an incident, you’re free to send a statement for me to consider posting.
If you want to send me legal threats about my reporting or comments, knock yourself out, but don’t be surprised to see me report on your threat, any confidentiality sig blocks you may attach notwithstanding. I have been threatened with lawsuits many times, and to be blunt: there is NOTHING you can threaten me with that will scare me even 1/10th as much as the day both my kids got their driver’s licenses within 15 minutes of each other.

Even though I had tweeted to MobiKwik on March 4 to question their claim, they never responded. And even after I emailed MobiKwik to tell them that I didn’t find their denial credible, they never responded. They never reached out to this site after the one boilerplate denial. But today, “Anonymous” complained.

You may want to look at the post “Anonymous” is complaining about so you can evaluate how accurate — or inaccurate — their claims are:

Reporter: Anonymous
Reported URLs:
https://www.databreaches.net/mobikwik-offers-master-class-in-how-not-to-respond-to-a-breach-researchers-scoff-consumers-rage/
Logs or Evidence of Abuse: The Blogger Dissent at Databreaches.net is Linking hacked/leaked personal information from Raid Forums on her blog. The sole reason for linking personal information is on the intent to maliciously shame a company so they can admit to being hacked.
This is not right for this blog to link personal information just to manipulate, harass and Shame.

This was the main part of my response to my web host:

I reviewed the post that “anonymous” found objectionable. Their complaint is almost entirely unfounded:
1. There was and is absolutely NO link to RaidForums.com in the post the “anonymous” complainant links to. The forum is mentioned but there is no data on RaidForums linked to at all.
2. There is not even one iota of personal information reproduced or leaked in the post. In fact, the post heavily redacted images to prevent anything from being revealed. That said:
3. I have removed a link to a now defunct portal that allowed consumers to find out what data the company held on them that had been hacked and leaked. Since the company claimed — and claims — that the data were not real data and were not their data anyway, I’m hard-pressed to understand how they can now claim I am leaking their firm’s customers’ personal data, but I have removed the link anyway.
But that link to a portal is all that I am willing to remove as there’s no personal info leaked or linked to in the remaining post.
They just don’t want to be embarrassed by criticism so they try to chill protected speech.
[…]

Now here’s my response to “Anonymous”:

You may have been able to censor Rajaharia on LinkedIn and Twitter, and you will probably keep trying to censor me, but I’d encourage you to learn about the Streisand Effect, and take this caution seriously: I don’t tolerate bullies or people who try to chill protected speech. I *will* fight back. And if I want to characterize your incident response to date as an EPIC FAIL, yes, I can do that, too.

Oh look, I just did.





Tis indeed a puzzlement. Who can you get if liking Google or hating Google are equally disqualifying?

https://www.politico.com/news/2021/04/02/biden-doj-nominee-silicon-valley-478934

Biden struggling to fill DOJ job that could rein in Silicon Valley

White House ethics officials are raising objections about DOJ antitrust candidates who have represented companies complaining about major tech companies, particularly Google.

President Joe Biden’s search for the Justice Department’s top trust-busting role is being bogged down by ethics concerns, both about candidates who have represented Silicon Valley’s giants and those who have represented critics of the big tech companies.





Perspective. I imagine a 15 year old hacker being offered a commission as a Major…

https://www.ft.com/content/92a63e8b-36a3-477d-9bb4-3bcfb60cc7fa?segmentid=acee4131-99c2-09d3-a635-873e61754ec6

UK military relaxes recruiting rules to attract cyber specialists

Tech experts can now be taken directly into senior ranks without having to work their way up the hierarchy

The UK armed forces have relaxed hiring rules to allow candidates from the private sector to go directly into senior military roles, in a drive to recruit more cyber specialists as warfare expands into the digital realm.

General Sir Patrick Sanders, head of the UK’s Strategic Command, told the Financial Times that while he sometimes envied Israel’s conscription model — which allows defence chiefs to find the best cyber talent from a population-wide pool — the British military was finding new ways to attract tech experts.

“I’m interested in people who may want to come in and spend a bit of time in defence, gain their credentials, their credibility and then move in and out,” said Sanders, who was speaking on the FT’s Rachman Review podcast. “And so that idea of a much more flexible approach to a career in defence, encouraging ‘lateral’ entry, and also looking at people with very different entry standards to what we traditionally expect.”



Friday, April 02, 2021

They really want to do this. Perhaps as a future medical passport? Ready for the next pandemic?

https://www.nbcnews.com/tech/covid-passports-are-coming-not-easy-build-rcna554

The next vaccine challenge: Building a workable 'passport' app

Tech companies, nonprofits and state agencies are racing to build digital vaccine certificates, and the Biden administration may have a say in how they turn out.

The Biden administration said this week that it won’t build a national vaccination app, leaving it to the private sector to create mobile digital passports that can prove people have been vaccinated for Covid-19.

But that doesn’t mean the White House is going to be hands-off.

Technologists and consultants who are helping to design future digital vaccine cards said they are counting on the Biden administration to provide federal support for the effort, even if officials are working mostly behind the scenes to shape decisions related to privacy or where vaccine passports could be deployed.





Once your data is exposed, the path to clean-up is not always obvious. Backups and archives.

https://www.databreaches.net/good-luck-explaining-to-hhs-why-your-phi-is-in-githubs-vault-for-the-next-1000-years/

Good Luck Explaining to HHS Why Your PHI is in GitHub’s Vault for the Next 1,000 Years

You may see a number of hospitals and covered entities issuing statements this week about a data security incident involving Med-Data (Med-Data, Incorporated). So far, Memorial Hermann, U. of Chicago, Aspirus, and OSF Healthcare have posted notices. Others should be or may be posting soon. Here’s DataBreaches.net’s exclusive report on the incident.

Another Day, Another GitHub Leak?

In August, 2020, Dutch independent security researcher Jelle Ursem and DataBreaches.net published a paper describing nine data leaks found on GitHub public repositories that involved protected health information.

In November, Ursem discovered yet another developer who had compromised the security of protected health information (PHI) by exposing it in public repositories. Much of the data appeared to involve claims data (Electronic Data Interchange, or EDI, data). Because the data came from a number of different clinical entities and involved claims data, the source appeared to be a business associate, which we then set out to identify. Our investigation into the data and the covered entities suggested that the firm might be Med-Data.

On December 8, DataBreaches.net reached out to the firm, but neither Ursem nor this site could get anyone to respond to our attempts to alert them to their leak.

On December 10, after other methods (including a voicemail to the executive who had ignored me) failed, DataBreaches.net left a voicemail for Med-Data’s counsel. She promptly called back, and from then on, we were taken seriously. Note: this blogger is the “independent journalist” whom Med-Data’s substitute notice mentions as contacting them on December 10, although we actually began contacting them on December 8.

On December 14, at their request, DataBreaches.net provided Med-Data with links to the repositories that were exposing protected health information. Med-Data’s statement indicates that the repositories were removed by December 17.

DataBreaches.net initially held off on reporting the incident for a few reasons, but then, to be honest, just totally forgot about it.

So What Happened?

When Med-Data investigated the exposure on GitHub, they discovered that a former employee had saved files to personal folders in public repositories (yes, more than one repository). The improper exposure had begun no later than September, 2019, although it might have begun earlier.

On February 5, 2021, cybersecurity specialists retained by Med-Data provided them with a list of individuals whose PHI was impacted by the incident. Med-Data reports:

A review of the impacted files revealed that they contained individuals’ names, in combination with one or more of the following data elements: physical address, date of birth, Social Security number, diagnosis, condition, claim information, date of service, subscriber ID (subscriber IDs may be Social Security numbers), medical procedure codes, provider name, and health insurance policy number.

That report is consistent with what we found in the exposed data.

Med-Data notified its clients on February 8, 2021 and mailed notices to impacted patients on March 31. Their notice does not explain why it took more than 60 days for notifications to be made. Those impacted were offered mitigation services with IDX.

In response to the incident, Med-Data has taken steps to minimize the risk of a similar event happening in the future. They

“implemented additional security controls, blocked all file sharing websites, updated internal data policies and procedures, implemented a security operations center, and deployed a managed detection and response solution.”

What they do not yet seem to have done, however, is provide a clear way to report a data security concern. Neither Ursem nor DataBreaches.net could find any link or contact method for doing so. Med-Data needs one, monitored by someone who can evaluate or escalate reports.
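One lightweight way any firm can provide that contact path is the proposed security.txt convention (securitytxt.org, since published as RFC 9116): a small plain-text file served at /.well-known/security.txt on the company’s domain. A minimal example, with placeholder addresses:

```
Contact: mailto:security@example.com
Expires: 2022-04-03T00:00:00Z
Preferred-Languages: en
```

Researchers like Ursem check this location first; without it, disclosure attempts fall back to voicemails and cold emails like the ones described above.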

But Were All the Data Really Removed?

One issue that arose — and may still not be resolved as we have received no answer to our inquiry about this — involves GitHub’s Arctic Code Vault.

As GitHub explains the vault: the code vault is a data repository in a very-long-term archival facility. The archive is described as being located in a decommissioned coal mine in the Svalbard archipelago, closer to the North Pole than the Arctic Circle. GitHub reportedly captured a snapshot of every active public repository on 02/02/2020 and preserved that data in the Arctic Code Vault. More details about the vault can be found on GitHub.

So what happens if copyrighted material that should not have been in a public repository is swept up into the vault? What happens if personal and sensitive material that never should have been in a public repository is swept up into the vault? What happened to some of Med-Data’s code that seems to have been swept into the vault (as indicated by the star showing that their developer and the repositories became a vault contributor)?

When Ursem pointed out this vault issue to Med-Data, they reached out to GitHub about getting logs for the vault and to discuss removal of code from the vault (depending on what the logs might show). We do not know what transpired after that, although there had been some muttering that Med-Data might sue GitHub to get the logs.

Did GitHub provide the logs? If so, what did they show? Is anyone’s PHI in GitHub’s Arctic Code Vault? And if so, what happens? Will GitHub remove it? Or will they claim they are immune from suit in the U.S. under Section 230 (if it still exists by then)? Or will code just be left there for researchers to explore in 1,000 years so they can wade through the personal and protected health information or other sensitive information of people who trusted others to protect their privacy?

In November, 2020, Ursem posed the question to GitHub on Twitter. They never replied.

We hope that GitHub cooperated with Med-Data, but we raise the issue here because we will bet you that many developers and firms have never even considered what might happen that could go so very wrong. This might be a good time to review our recommendations in “No Need to Hack When It’s Leaking.”

Update 8:01 pm: Post-publication, we found that King’s Daughters and SCL Health had also posted notices on the Med-Data breach. We know that there are other entities that should be disclosing, so this will be updated when we find their notices.





AI Governance?

https://hbr.org/2021/04/if-your-company-uses-ai-it-needs-an-institutional-review-board

If Your Company Uses AI, It Needs an Institutional Review Board

Summary. Companies that use AI know that they need to worry about ethics, but when they start, they tend to follow the same broken three-step process: They identify ethics with “fairness,” they focus on bias, and they look to use technical tools and stakeholder outreach to mitigate their risks. Unfortunately, this sets them up for failure. When it comes to AI, focusing on fairness and bias ignores a huge swath of ethical risks; many of these ethical problems defy technical solutions. Instead of trying to reinvent the wheel, companies should look to the medical profession and adopt institutional review boards (IRBs). IRBs, which are composed of a diverse team of experts, are well suited to complex ethical questions. When given jurisdiction and power, and brought in early, they’re a powerful tool that can help companies think through hard ethical problems — saving money and brand reputation in the process.





See? It can be done.

https://techxplore.com/news/2021-04-artificial-intelligence-algorithm.html

Researchers develop 'explainable' artificial intelligence algorithm

Sudhakar says that, broadly speaking, there are two methodologies to develop an XAI algorithm—each with advantages and drawbacks.

The first, known as back propagation, relies on the underlying AI architecture to quickly calculate how the network's prediction corresponds to its input. The second, known as perturbation, sacrifices some speed for accuracy and involves changing data inputs and tracking the corresponding outputs to determine the necessary compensation.
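The perturbation approach is easy to illustrate: replace one input feature at a time with a baseline value and measure how far the model's output moves. A toy sketch, assuming a generic scoring function rather than LG's actual defect-detection model (the weights below are invented for illustration):

```python
def occlusion_saliency(model, x, baseline=0.0):
    """Perturbation-based explanation: the importance of feature i is
    how much the model's output changes when x[i] is replaced by a
    baseline value. Costs one model call per feature, so it is slower
    than backprop-style methods, but it is model-agnostic."""
    original = model(x)
    saliency = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline
        saliency.append(abs(original - model(perturbed)))
    return saliency

# Toy "model": a fixed linear scorer (weights chosen for illustration).
weights = [3.0, 0.0, -1.0]
model = lambda x: sum(w * v for w, v in zip(weights, x))

print(occlusion_saliency(model, [1.0, 1.0, 1.0]))  # → [3.0, 0.0, 1.0]
```

The saliency mirrors the weight magnitudes, as it should for a linear model; SISE's contribution, per the paper, is combining this kind of perturbation signal with the speed of architecture-aware methods.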

"Our partners at LG desired a new technology that combined the advantages of both," says Sudhakar. "They had an existing [machine learning] model that identified defective parts in LG products with displays, and our task was to improve the accuracy of the high-resolution heat maps of possible defects while maintaining an acceptable run time."

The team's resulting XAI algorithm, Semantic Input Sampling for Explanation (SISE), is described in a recent paper for the 35th AAAI Conference on Artificial Intelligence.

https://arxiv.org/abs/2102.07799





Resource?

https://www.princeton.edu/news/2021/04/01/hello-world-princeton-and-whyy-launch-new-podcast-ai-nation

Hello, World. Princeton and WHYY launch new podcast “A.I. Nation”

Decisions once made by people are increasingly being made by machines, often without transparency or accountability. In “A.I. Nation,” a new podcast premiering on April 1, Princeton University and Philadelphia public radio station WHYY have partnered to explore the omnipresence of artificial intelligence (A.I.) and its implications for our everyday lives.

“A.I. Nation” is co-hosted by Ed Felten, the Robert E. Kahn Professor of Computer Science and Public Affairs and founding director of Princeton’s Center for Information Technology Policy, and WHYY reporter Malcolm Burnley. Over the course of five episodes, the pair will investigate how artificial intelligence is affecting our lives right now, and the impact that technologies like machine learning, automation and predictive analytics will have on our future.

In episode one, Felten and Burnley experiment with GPT-3, a natural language processing technology developed by OpenAI, a research lab founded by Elon Musk and funded by Microsoft. While GPT-3’s capabilities are incredible — it can write everything from novels to news stories — it can also be inconsistent. What is more alarming, however, is that the technology is capable of spreading misinformation. This, as Felten and Burnley discuss, is one of the reasons why OpenAI believed its previous version of the model, GPT-2, was too dangerous to release to the general public.

In episode two, “A.I. in the Driver’s Seat,” Burnley and Felten consider the safety, security and ethical implications of automated machines. Burnley tours a Princeton drone lab with Anirudha Majumdar, assistant professor of mechanical and aerospace engineering, to witness the A.I. behind drones in action. Felten and Burnley also discuss some of the reasons why self-driving vehicles, a technology that has been in development for decades, are still not available to the public and how they might be used in the near future.

New episodes — on “The Next Pandemic” (April 15), “Biased Intelligence” (April 22) and “Echo Chambers” (April 29) — will be released throughout the month.





Because Colorado.

https://www.makeuseof.com/iphone-apps-every-skier-and-snowboarder-needs/

8 iPhone Apps Every Skier and Snowboarder Needs



Thursday, April 01, 2021

For my security students.

https://www.tripwire.com/state-of-security/security-data-protection/role-of-encryption-in-gdpr-compliance/

Role of Encryption in GDPR Compliance

Today’s article is about one such data privacy law that repeatedly mentions the adoption of encryption: the GDPR, the EU’s data privacy regulation. Although encryption is not mandatory under the GDPR, it is seen as a best practice for protecting personal data. So let us first understand what data encryption is, and then the role encryption plays in GDPR compliance.
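Worth noting for students: GDPR Article 32 names both encryption and pseudonymisation as example safeguards. Python's standard library has no symmetric cipher (real encryption calls for a vetted library such as `cryptography`), so here is a stdlib-only sketch of the companion technique, keyed pseudonymisation; the key value is a placeholder:

```python
import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Records stay linkable for analytics, but re-identification
    requires the key — which, per GDPR Art. 4(5), must be kept
    separately from the pseudonymised data."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()

key = b"keep-this-key-out-of-the-dataset"  # placeholder key
token = pseudonymize("alice@example.com", key)

# Same input + same key -> same token, so joins across tables still work:
assert token == pseudonymize("alice@example.com", key)
```

Unlike plain hashing, an attacker who obtains the dataset alone cannot brute-force identifiers back out without also stealing the key, which is the property regulators are after.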





The role keeps changing…

https://www.csoonline.com/article/3332026/what-is-a-ciso-responsibilities-and-requirements-for-this-vital-leadership-role.html#tk.rss_all

What is a CISO? Responsibilities and requirements for this vital leadership role

Learn what it takes to land a CISO job and how to be successful in the role.





Timely.

https://www.bespacific.com/crs-biometric-technologies-and-global-security/

CRS – Biometric Technologies and Global Security

CRS In Focus – Biometric Technologies and Global Security March 30, 2021: “Biometric technologies use unique biological or behavioral attributes—such as DNA, fingerprints, cardiac signatures, voice or gait patterns, and facial or ocular measurements—to authenticate an individual’s identity. Although biometric technologies have been in use for decades, recent advances in artificial intelligence (AI) and Big Data analytics have expanded their application. As these technologies continue to mature and proliferate, largely driven by advances in the commercial sector, they will likely hold growing implications for congressional oversight, civil liberties, U.S. defense authorizations and appropriations, military and intelligence concepts of operations, and the future of war…”



(Related)

https://www.cpomagazine.com/data-privacy/white-collar-blue-collar-schism-at-apple-factory-workers-subject-to-collection-of-biometric-data-extra-security-measures/

White Collar / Blue Collar Schism at Apple: Factory Workers Subject to Collection of Biometric Data, Extra Security Measures

Apple has branded itself as the company that puts user privacy front and center, making bold moves to that end with the release of iOS 14. It has called privacy a “fundamental human right” and has said that its internal human rights policy applies to “… business partners and people at every level of its supply chain.” However, it seems that some elements of the chain are more equal than others. A new company policy forbids manufacturing partners from collecting the biometric data of visiting Apple employees, but says nothing about the over one million workers that put Apple’s products together in these facilities.

These workers will also now be subject to tighter security controls mandated by Apple, which include criminal background checks, an expansion of surveillance cameras and new systems that track components during the assembly process and issue alerts when something is in one place for too long or not moving as expected.





Not yet a tool, but the start of specifications for a tool.

https://www.infoworld.com/article/3613832/4-key-tests-for-your-ai-explainability-toolkit.html

4 key tests for your AI explainability toolkit

Until recently, explainability was largely seen as an important but narrowly scoped requirement towards the end of the AI model development process. Now, explainability is being regarded as a multi-layered requirement that provides value throughout the machine learning lifecycle.

An enterprise-grade explainability solution must meet four key tests:

    1. Does it explain the outcomes that matter?

    2. Is it internally consistent?

    3. Can it perform reliably at scale?

    4. Can it satisfy rapidly evolving expectations?





I think their definition is unworkable.

https://www.natlawreview.com/article/cpsc-digs-artificial-intelligence

The CPSC Digs In on Artificial Intelligence

… On March 2, 2021, at a virtual forum attended by stakeholders across the entire industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the last say on regulating AI and machine learning consumer product safety.

… The CPSC defines AI as “any method for programming computers or products to enable them to carry out tasks or behaviors that would require intelligence if performed by humans” and machine learning as “an iterative process of applying models or algorithms to data sets to learn and detect patterns and/or perform tasks, such as prediction or decision making that can approximate some aspects of intelligence.”3



Wednesday, March 31, 2021

Sounds like a really nasty one. Which idiot listened to the lawyers?

https://krebsonsecurity.com/2021/03/whistleblower-ubiquiti-breach-catastrophic/

Whistleblower: Ubiquiti Breach “Catastrophic”

On Jan. 11, Ubiquiti Inc. [NYSE:UI] — a major vendor of cloud-enabled Internet of Things (IoT) devices such as routers, network video recorders and security cameras — disclosed that a breach involving a third-party cloud provider had exposed customer account credentials. Now a source who participated in the response to that breach alleges Ubiquiti massively downplayed a “catastrophic” incident to minimize the hit to its stock price, and that the third-party cloud provider claim was a fabrication.

A security professional at Ubiquiti who helped the company respond to the two-month breach beginning in December 2020 contacted KrebsOnSecurity after raising his concerns with both Ubiquiti’s whistleblower hotline and with European data protection authorities. The source — we’ll call him Adam — spoke on condition of anonymity for fear of retribution by Ubiquiti.

“It was catastrophically worse than reported, and legal silenced and overruled efforts to decisively protect customers,” Adam wrote in a letter to the European Data Protection Supervisor. “The breach was massive, customer data was at risk, access to customers’ devices deployed in corporations and homes around the world was at risk.”

Ubiquiti has not responded to repeated requests for comment.

According to Adam, the hackers obtained full read/write access to Ubiquiti databases at Amazon Web Services (AWS), which was the alleged “third party” involved in the breach. Ubiquiti’s breach disclosure, he wrote, was “downplayed and purposefully written to imply that a 3rd party cloud vendor was at risk and that Ubiquiti was merely a casualty of that, instead of the target of the attack.”





This must have come up during the acquisition of ABT? I wonder how their contract handles this?

https://www.databreaches.net/fl-school-officials-investigate-possible-breach-involving-firm-they-never-used/

FL: School officials investigate possible breach involving firm they never used

John Henderson reports:

Alachua County school officials are investigating whether students’ personal information was compromised after a data breach in a computer system connected to school meal programs.
The district notified families of school children Monday that a letter sent out recently by PCS Revenue Control Systems Inc. — a company that handles computer services for reduced lunch programs — is legitimate.

We’ve been seeing a number of these notifications, as reported elsewhere on this site. But in this case, school officials note that they never had a contract with PCS Revenue Control.

“Although our district has not used PCS Revenue Control Systems, we did use a company called Advanced Business Technologies (ABT) that was later taken over by PCS,” the letter said. “Our contract with ABT to gather information for families applying for free or reduced-price meals ended in 2016.”

So it seems that PCS got the district’s data when it acquired ABT, even though there was never a direct contract with PCS.

School district officials report they are having trouble getting in touch with PCS.





Governments (and the military they fund) are always behind the curve.

https://www.c4isrnet.com/artificial-intelligence/2021/03/30/jaic-director-pentagons-biggest-competitive-threat-obsolescence/

JAIC director: Pentagon’s biggest competitive threat? Obsolescence

The Pentagon’s top artificial intelligence official warned Tuesday that the department’s biggest competitive threat is obsolescence.

“The biggest competitive threat is our own obsolescence,” said Lt. Gen. Michael Groen, director of the Joint Artificial Intelligence Center. “I could walk out into the parking lot of the Pentagon, turn on my iPhone and join a data-driven, completely integrated environment. I can get whatever services I want. I can review, I can find, I can research. I can do it all at my fingertips. I can’t do any of that on a defense network.”

“We can’t operate that way. We can’t win that way. We can’t be competitive in that way,” he said during the Potomac Officers Club AI Summit.





Some interesting questions!

https://www.nextgov.com/emerging-tech/2021/03/regulators-want-know-how-financial-institutions-use-ai-and-how-theyre-mitigating-risks/173016/

Regulators Want to Know How Financial Institutions Use AI and How They’re Mitigating Risks

The financial sector is using forms of AI—including machine learning and natural language processing—to automate rote tasks and spot trends humans might miss. But new technologies always carry inherent risks, and AI has those same issues, as well as a host of its own.

On Wednesday, the Board of Governors of the Federal Reserve System, the Bureau of Consumer Financial Protection, the Federal Deposit Insurance Corporation, the National Credit Union Administration and the Office of the Comptroller of the Currency will publish a request for information in the Federal Register seeking feedback on AI uses and risk management in the financial sector.





Every little bit helps.

https://www.cpomagazine.com/data-privacy/keeping-up-with-privacy-legislation-easier-said-than-done/

Keeping Up with Privacy Legislation: Easier Said than Done

The privacy landscape has shifted dramatically over the past 12 months. From new hurdles including international data transfers to more than 20 new laws for COVID-19 regulatory requirements and living adjustments, privacy practitioners have a range of unprecedented new challenges to address. Legislation was introduced in 2020 to address the collection and use of biometric or facial recognition data by commercial entities. The outbreak of COVID-19 also led to the creation of new laws for regulating the protection of employee privacy. While the CCPA is one of the most well-known, in 2020 other states have also adopted their own privacy laws and requirements for businesses to implement and maintain reasonable security measures.

The following highlights significant data privacy developments:



(Related)

https://www.pogowasright.org/colorado-introduces-a-comprehensive-consumer-privacy-bill/

Colorado Introduces a Comprehensive Consumer Privacy Bill

Joseph J. Lazzarotti and Maya Atrakchi of JacksonLewis write:

Colorado recently became the latest state to consider a comprehensive consumer privacy law. On March 19, 2021, Colorado State Senators Rodriguez and Lundeen introduced SB 21-190, entitled “an Act Concerning additional protection of data relating to personal privacy”. Following California’s bold example of the California Consumer Privacy Act (“CCPA”) effective since January 2020, Virginia recently passed its own robust privacy law, the Consumer Data Protection Act (“CDPA”), and New York, as well as other states, like Florida, appear poised to follow suit. Furthermore, California is expanding protections provided by the CCPA, with the California Privacy Rights Act (CPRA) – approved by California voters under Proposition 24 in the November election.

Read more on Workplace Privacy, Data Management & Security





And at appropriate (most suggestible) times, the device will whisper ads to our sleeping brains?

https://www.bespacific.com/in-bed-with-google-sleep-sensing-feature-prompts-privacy-worries/

In bed with Google: Sleep Sensing feature prompts privacy worries

CNET – The search giant already knows what you’re doing for much of your waking life. Google wants you to take its latest gadget with you into the bedroom. The marquee feature on the search giant’s new Nest Hub, a smart display released on Tuesday, is a tool called Sleep Sensing that tracks a person’s sleeping patterns by measuring motion and noise at their bedside. It can record when you fall asleep and wake up, or how long it takes you to get to sleep. It knows if your slumber is interrupted during the night and how fast you’re breathing while asleep. It’s by no means the first sleep tracker to hit the market. But some privacy experts worry specifically about Google’s push into sleep data because of the company’s shaky track record when it comes to user privacy. The focus on sleep tracking underscores an uncomfortable reality about Google’s size and ubiquity. The tech giant already collects vast amounts of data about people in their waking lives: what they search for online, what videos they watch on YouTube and where they’ve traveled, from location data gathered through an Android phone or Google Maps. Now the company is zeroing in on the other half of people’s lives — what they’re doing when they’re not awake…”





A great idea few will ever use?

https://fortune.com/2021/03/30/humans-are-plagued-by-hidden-biases-a-i-can-help/

Humans are plagued by hidden biases. A.I. can help.

There are a lot of stories about A.I. systems picking up the human biases lurking in the data used to train them. But can A.I. also help humans uncover their own unconscious biases?

That’s what Apoorv Agarwal and Omar Haroun think. They are the co-founders of New York-based startup Text IQ. The company’s natural language processing software is primarily used by large businesses to keep track of personal identifying information in their datasets. It helps ensure companies don’t accidentally disclose this personal information in violation of legal requirements or compliance policies. Its software is also useful in cases when a company suffers a data breach and has to inform people whose personal identifying information may have been compromised.

But not too long ago, Agarwal and Haroun went through “unconscious bias training” of the kind that many company HR departments have instituted as part of their diversity and inclusion efforts. And the pair suddenly had a brainwave: they could turn Text IQ’s systems into a tool to help their customers with unconscious bias.
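Text IQ’s actual system is proprietary NLP and far more sophisticated than pattern matching, but as a loose illustration of the simpler end of the PII-flagging idea described above, a regex-based scan might look like this (all patterns, names, and sample data here are my own assumptions, not Text IQ’s):

```python
import re

# Toy illustration only: real PII detection uses trained NLP models,
# not just regular expressions. These patterns are simplified assumptions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_pii(text):
    """Return a list of (kind, match) pairs found in the text."""
    hits = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

sample = "Contact jane@example.com or 555-867-5309; SSN 123-45-6789."
print(find_pii(sample))
```

A breach-notification workflow of the kind mentioned above would run something like this over a dataset to inventory whose identifiers appear where, before notice letters go out.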





Think of this as a ‘feel good’ piece, written by the Terminator.

https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter

Why Computers Won’t Make Themselves Smarter

We fear and yearn for “the singularity.” But it will probably never come.





Perspective.

https://www.theverge.com/2021/3/30/22358005/volvo-aurora-autonomous-truck-partnership?scrolla=5eb6d68b7fedc32c19ef33b4

Volvo and Aurora team up on fully autonomous trucks for North America

Aurora has been testing its “Aurora Driver” hardware and software stack in its test fleet of minivans and Class 8 trucks in the Dallas-Fort Worth area since last year. Unlike its rivals, which are largely focused on robotaxi applications, the company has said that its first commercial service will be in trucking “where the market is largest today, the unit economics are best, and the level of service requirements is most accommodating.”





Tools.

https://www.theverge.com/2021/3/30/22358088/google-stack-ai-document-scanner-app-android-release-announce

Google’s Area 120 incubator releases a powerful AI document scanner for Android

Google’s Area 120, an internal incubator program for experimental projects, is releasing a new app today called Stack that borrows the technology underlying the search giant’s powerful DocAI enterprise tool for document analysis. The end result is a consumer document scanner app for Android that Google says vastly improves over your average mobile scanner by auto-categorizing documents into the titular stacks and enabling full text search through the contents of the documents and not just the title.

https://play.google.com/store/apps/details?id=com.area120.paperwork&hl=en_US&gl=US
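Stack’s real pipeline rides on Google’s DocAI and is not public, but the “full text search through the contents” feature rests on a familiar idea: index the OCR’d text of each document. A minimal sketch of that idea, with hypothetical document names and text, assuming the OCR step has already produced plain text:

```python
from collections import defaultdict

def build_index(docs):
    """docs: {doc_name: ocr_text}. Returns word -> set of doc names
    containing that word (a simple inverted index)."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            index[word.strip(".,;:")].add(name)
    return index

# Hypothetical OCR output for two scanned documents
docs = {
    "receipt_2021.pdf": "Total due 42.00 paid by card",
    "lease.pdf": "Monthly rent due on the first",
}
index = build_index(docs)
print(sorted(index["due"]))  # both documents mention "due"
```

Searching contents rather than titles is what distinguishes this from a basic scanner app: a query like “due” surfaces both documents even though neither filename mentions it.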





Resources.

https://www.bespacific.com/guide-to-education-and-academic-resources-2021/

Guide to Education and Academic Resources 2021

Via LLRX Education and Academic Resources 2021 – Marcus P. Zillman’s guide comprises an extensive listing of resources and sites for students, researchers, teachers, infopros and parents, on multiple study areas. Sourced from the academic, public, private, association and corporate sectors, the subject matters include: distance learning; MOOCs; lecture guides and study notes; study skill resources; online tutoring and homework help; free e-learning videos; scholarship resources; and PhD, dissertation, thesis, and academic writing resources.