Friday, May 20, 2022

This is not a “Get out of jail free” card for my Ethical Hackers. More a “Stay out of jail, IF ...” card.

https://www.theregister.com/2022/05/20/cfaa_rule_change/

US won’t prosecute ‘good faith’ security researchers under CFAA

The US Justice Department has directed prosecutors not to charge "good-faith security researchers" with violating the Computer Fraud and Abuse Act (CFAA) if their reasons for hacking are ethical — things like bug hunting, responsible vulnerability disclosure, or above-board penetration testing.

Good-faith, according to the policy [PDF], means using a computer "solely for purposes of good-faith testing, investigation, and/or correction of a security flaw or vulnerability."





Illustrating complexity for my students.

https://www.cpomagazine.com/data-protection/data-privacy-conundrum-when-different-states-play-by-different-rules/

Data Privacy Conundrum: When Different States Play by Different Rules…

It’s been less than two and a half years since the California Consumer Privacy Act, also known as CCPA, went into effect, but the influence of that signature legislation is already incalculable. Like the General Data Protection Regulation (GDPR), the European mandate that came before it, this set of wide-ranging regulations has fundamentally changed the conversation on data privacy and reset the clock on what government can and should do to protect consumers’ personal information.

Even CCPA won’t be CCPA much longer—when 2024 arrives, it’ll be CPRA, or the California Privacy Rights Act, which encompasses its predecessor while establishing more stringent measures (and enforcement bodies to make sure they stick). However, there are even bigger changes on the horizon, and they potentially affect every company doing business in every state.





To Bio or not to Bio? (And other interesting questions)

https://fpf.org/blog/when-is-a-biometric-no-longer-a-biometric/

When Is a Biometric No Longer a Biometric?

In October 2021, the White House Office of Science and Technology Policy (OSTP) published a Request for Information (RFI) regarding uses, harms, and recommendations for biometric technologies. Over 130 entities responded to the RFI, including advocacy organizations, scientists, experts in healthcare, lawyers, and technology companies. While most commenters agreed on core concepts of biometric technologies used to identify or verify identity (with differences in how to address them in policy), there was clear division over the extent to which the law should apply to emerging technologies used for physical detection and characterization (such as skin cancer detection or diagnostic tools). These comments reveal that there is no general consensus on what “biometrics” should entail and thus what the applicable scope of law should be.





...and humans shall have the rights AI shall grant them, and no more.

https://www.bespacific.com/human-rights-and-algorithmic-opacity/

Data Privacy, Human Rights, and Algorithmic Opacity

Lu, Sylvia Si-Wei, Data Privacy, Human Rights, and Algorithmic Opacity (May 6, 2022). California Law Review, Vol. 110 (2022, forthcoming). Available at SSRN: https://ssrn.com/abstract=4004716

Decades ago, it was difficult to imagine a reality in which artificial intelligence (AI) could penetrate every corner of our lives to monitor our innermost selves for commercial interests. Within a few decades, the private sector has seen a wild proliferation of AI systems, many of which are more powerful and penetrating than anticipated. In many cases, machine-learning-based AI systems have become “the power behind the throne,” tracking user activities and making fateful decisions through predictive analysis of personal information. However, machine-learning algorithms can be technically complex and legally claimed as trade secrets, creating an opacity that hinders oversight of AI systems. Accordingly, many AI-based services and products have been found to be invasive, manipulative, and biased, eroding privacy rules and human rights in modern society. The emergence of advanced AI systems thus generates a deeper tension between algorithmic secrecy and data privacy. Yet, in today’s policy debate, algorithmic transparency in a privacy context is an issue that is equally important but managerially disregarded, commercially evasive, and legally unactualized. This Note illustrates how regulators should rethink strategies regarding transparency for privacy protection through the interplay of human rights, disclosure regulations, and whistleblowing systems. It discusses how machine-learning algorithms threaten privacy protection through algorithmic opacity, assesses the effectiveness of the EU’s response to privacy issues raised by opaque AI systems, demonstrates the GDPR’s inadequacy in addressing privacy issues caused by algorithmic opacity, and proposes new algorithmic transparency strategies toward privacy protection, along with a broad array of policy implications and suggested moves. The analytical results indicate that in a world where algorithmic opacity has become a strategic tool for firms to escape accountability, regulators in the EU, the US, and elsewhere should adopt a human-rights-based approach to impose a social transparency duty on firms deploying high-risk AI techniques.





Perspective.

https://www.techrepublic.com/article/ai-remains-priority-ceos-gartner-survey/

AI remains priority for CEOs, according to new Gartner survey

For the third year running, AI is the top priority for CEOs, according to a survey of CEOs and senior executives released by Gartner on Wednesday.

The survey, “2022 CEO Survey — The Year Perspectives Changed,” gauged the opinions of CEOs and top executives on a range of issues, from the workforce to the environment and digitalization. The findings also revealed that the metaverse, which has received a lot of hype in the last year, especially since the rebranding of Facebook to Meta, is not as relevant to business leaders: 63% say that they do not see the metaverse as a key technology for their organization.





For my students.

https://insights.dice.com/2022/05/20/are-there-a-lot-of-artificial-intelligence-a-i-jobs-right-now/

Are There a Lot of Artificial Intelligence (A.I.) Jobs Right Now?

Interested in a career in machine learning and artificial intelligence (A.I.)? Curious about the number of opportunities out there? A new breakdown shows that A.I. remains a highly specialized field with relatively few job openings—but that will almost certainly change in coming years.

CompTIA’s monthly Tech Jobs Report reveals that states with the largest tech hubs—including California, Texas, Washington, and Massachusetts—lead when it comes to A.I.-related job postings.


