How do you measure up?
https://www.databreaches.net/heres-the-breakdown-of-cybersecurity-stats-only-law-firms-usually-see/
Here’s the breakdown of cybersecurity stats only law firms usually see
Joe Uchill has a good interview with Craig Hoffman of BakerHostetler about their recent report, which draws on the firm's extensive incident response experience handling ransomware incidents.
BakerHostetler has always been one of my most trusted resources on breach response, as they are quite blunt in their advice, even when it contradicts what government or other companies promote. They were the first to be clear that, despite warnings that paying ransom does not guarantee getting data back, almost all of their clients who paid ransom did get their data back. But that was with experts involved in making the decision to pay or not pay.
Read more on SC Magazine.
Previews of coming attractions?
https://www.pogowasright.org/austrian-dpa-has-option-to-fine-google-up-to-e6-billion/
Austrian DPA has option to fine Google up to €6 billion
From noyb.eu:
Google continues to send data from EU websites to the US – despite two Court of Justice rulings. Austrian Data Protection Authority could fine Google up to €6 billion.
Last summer, the European Court of Justice (CJEU) ruled – already for the second time – that US surveillance laws generally make the transfer of personal data from the EU to the US illegal. Google continues to ignore this decision and now argues before the Austrian DSB (PDF) that it may continue to transfer data on millions of visitors of EU websites to the US – in blatant contradiction to the GDPR. The Austrian data protection authority (DSB) now has the option to fine Google up to €6 billion under the GDPR.
Read more on noyb.eu.
What level of unbiased accuracy would bring it back? 90%? 99%?
https://nypost.com/2021/05/06/states-push-back-against-use-of-facial-recognition-by-police/
States push back against use of facial recognition by police
… At least seven states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy. Debate over additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data compiled by the Electronic Privacy Information Center.
… Complaints about false identifications prompted Amazon, Microsoft and IBM to pause sales of their software to police, though most departments hire lesser-known firms that specialize in police contracts. Wrongful arrests of Black men have gained attention in Detroit and New Jersey after the technology was blamed for mistaking their images for those of others.
(Related)
EU Proposes Heavy Regulation of “High Risk” Artificial Intelligence as Activists Call for Facial Recognition Ban
EU officials are considering wide-ranging regulation that would include heavy restrictions on a range of “high risk” AI applications as well as facial recognition systems used by law enforcement. A leaked document also indicates that a facial recognition ban for cases of “indiscriminate” and “generalized” mass surveillance is also being considered, but privacy watchdogs in the region would like to see things taken a step further and have the technology made entirely unavailable to the police.
… An AI application categorized as “high risk” would be subject to special inspections, including examination of how its data sets are trained. These would include financial applications, college admissions, employment and critical infrastructure among other examples. Some categories might face an outright ban if deemed to be an “unacceptable risk”; examples cited here include “manipulating behavior to circumvent free will,” “targeting vulnerable groups” and using “subliminal techniques.” The risk level of an application would be determined by specific criteria including intended purpose, the number of people potentially affected and how irreversible the potential harm might be. The majority of AI applications, those that use relatively simple rule-based systems (such as chatbots and video games), would be considered low enough risk to not be subject to these regulations.
Still a bit unclear. This looks like a support organization for “privacy vendors.”
https://www.coindesk.com/organizations-data-privacy-protocol-alliance-dppa
Over 20 Organizations Form Alliance to Focus on Data Privacy and Monetization
Over 20 businesses worldwide announced the creation of the Data Privacy Protocol Alliance (DPPA) yesterday. DPPA is set to build a decentralized blockchain-based data system that it hopes will compete against data monopolies such as Google or Facebook by allowing users to take control of their own data.
Specifically, the Data Privacy Protocol Alliance will develop a set of guidelines and specifications for a version of CasperLabs’ layer-one blockchain “optimized for data sharing, data storage, data ownership, and data monetization,” according to the announcement.
The Casper Network is a proof-of-stake network where businesses can build private or permissioned applications. The network also claims to offer upgradeable smart contracts, predictable gas fees and the ability to scale.
All self-driving cars are the Terminator in disguise. Hence, “Terminator bias!”
https://www.bespacific.com/judging-autonomous-vehicles/
Judging Autonomous Vehicles
Rachlinski, Jeffrey John and Wistrich, Andrew J., Judging Autonomous Vehicles (March 17, 2021). Available at SSRN: https://ssrn.com/abstract=3806580 or http://dx.doi.org/10.2139/ssrn.3806580
“The introduction of any new technology challenges judges to determine how it fits into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough. The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges, as this technology will alter the foundation of the largest source of civil liability in the United States. Although regulatory agencies will determine when and how autonomous cars may be placed into service, judges will likely play a central role in defining the standards for liability for them. How will judges treat this new technology? People commonly exhibit biases against innovations, such as a naturalness bias, in which people disfavor injuries arising from artificial sources. In this paper we present data from 933 trial judges showing that judges exhibit bias against self-driving vehicles. They both assigned more liability to a self-driving vehicle than they would to a human-driven vehicle and treated injuries caused by a self-driving vehicle as more serious than injuries caused by a human-driven vehicle.”
Tools.
Knowt Now Offers Public Galleries of Notes, Flashcards, and Quizzes
Knowt is a neat service that I've featured a few times over the last couple of years. It automatically generates flashcards and quizzes from any document that you import into it. The latest update to Knowt provides registered teachers and students with a public gallery of notes, quizzes, and flashcards.
Now when you sign into a free Knowt account you have the option to browse for notes, flashcards, and quizzes according to subject area. There is also a gallery of notes, quizzes, and flashcards based on popular textbooks. All of the notes, quizzes, and flashcards found through the public galleries in Knowt can be copied directly into your account where you can modify them as you like.
Here's Knowt's promo video for their new galleries of notes, quizzes, and flashcards. And here's my overview of how to use Knowt to create your own notes, quizzes, and flashcards by importing a document into your account.