Tuesday, October 05, 2021

Wouldn’t you expect greater testing before such a sensitive update and a plan to quickly reverse things if anything went wrong?

https://krebsonsecurity.com/2021/10/what-happened-to-facebook-instagram-whatsapp/

What Happened to Facebook, Instagram, & WhatsApp?

Facebook and its sister properties Instagram and WhatsApp are suffering from ongoing, global outages. We don’t yet know why this happened, but the how is clear: Earlier this morning, something inside Facebook caused the company to revoke key digital records that tell computers and other Internet-enabled devices how to find these destinations online.

Doug Madory is director of internet analysis at Kentik, a San Francisco-based network monitoring company. Madory said at approximately 11:39 a.m. ET today (15:39 UTC), someone at Facebook caused an update to be made to the company’s Border Gateway Protocol (BGP) records. BGP is a mechanism by which Internet service providers of the world share information about which providers are responsible for routing Internet traffic to which specific groups of Internet addresses.

In simpler terms, sometime this morning Facebook took away the map telling the world’s computers how to find its various online properties. As a result, when one types Facebook.com into a web browser, the browser has no idea where to find Facebook.com, and so returns an error page.
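The failure a browser hits in that situation is the very first step of loading a page: name resolution. Below is a minimal sketch, using only Python's standard library, of that lookup step; the `resolve` function name is illustrative, not part of any browser's actual code.

```python
import socket

def resolve(hostname):
    """Return an IP address for `hostname`, or None if DNS resolution fails.

    This mirrors the first thing a browser does with "Facebook.com":
    ask DNS for an address. When the authoritative DNS servers are
    unreachable (as when the routes to them are withdrawn via BGP),
    the lookup fails and the browser can only show an error page.
    """
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None
```

The `.invalid` top-level domain is reserved and never resolves, so `resolve("no-such-host.invalid")` returns `None` in the same way an unreachable domain would during the outage.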



Repetition to reiterate the need for redundant preparation.

https://www.csoonline.com/article/3635590/think-you-are-prepared-for-ransomware-you-re-probably-not.html#tk.rss_all

Think You Are Prepared for Ransomware? You’re Probably Not.

Ransomware has increased nearly 1100% over the last year according to FortiGuard Labs research, impacting organizations of all sizes and across all market sectors. And according to Fortinet’s State of Ransomware survey, 96% of organizations indicate that they are concerned about the threat of a ransomware attack, with 85% reporting that they are more worried about a ransomware attack than any other cyber threat. As a result, preparing for a ransomware attack has become a boardroom issue and a top priority for CISOs worldwide.



Have we arrived at a perfect solution already?

https://www.weforum.org/agenda/2021/10/facial-recognition-technology-law-enforcement-human-rights/

This is best practice for using facial recognition in law enforcement

A new white paper from the World Economic Forum, in partnership with the International Criminal Police Organization (INTERPOL), the Centre for Artificial Intelligence and Robotics of the United Nations Interregional Crime and Justice Research Institute (UNICRI) and the Netherlands police, offers a framework to ensure the responsible use of facial recognition technology.

In April 2021, the European Commission (EC) released its much-awaited Artificial Intelligence Act, a comprehensive regulatory proposal that classifies AI applications under distinct categories of risks. Among the identified high-risk applications, remote biometric systems, which include facial recognition technology (FRT), were singled out as particularly concerning. Their deployment, specifically in the field of law enforcement, may lead to human rights abuses in the absence of robust governance mechanisms.



My AI says there are more than five things…

https://fpf.org/blog/five-things-lawyers-need-to-know-about-ai/

FIVE THINGS LAWYERS NEED TO KNOW ABOUT AI

Note: This article is part of a larger series focused on managing the risks of artificial intelligence (AI) and analytics, tailored toward legal and privacy personnel. The series is a joint collaboration between bnh.ai, a boutique law firm specializing in AI and analytics, and the Future of Privacy Forum, a non-profit focusing on data governance for emerging technologies.

Behind all the hype, AI is an early-stage, high-risk technology that creates complex grounds for discrimination while also posing privacy, security, and other liability concerns. Given recent EU proposals and FTC guidance, AI is fast becoming a major topic of concern for lawyers. Because AI has the potential to transform industries and entire markets, those at the cutting edge of legal practice are naturally bullish about the opportunity to help their clients capture its economic value. Yet to act effectively as counsel, lawyers must also be vigilant of the very real challenges of AI. Lawyers are trained to respond to risks that threaten the market position or operating capital of their clients. However, when it comes to AI, it can be difficult for lawyers to provide the best guidance without some basic technical knowledge. This article shares some key insights from our shared experiences to help lawyers feel more at ease responding to AI questions when they arise.



Which human values? This book is not (yet) in my local library.

https://www.nature.com/articles/d41586-021-02693-2

Reboot AI with human values

In AI We Trust: Power, Illusion and Control of Predictive Algorithms Helga Nowotny Polity (2021)

In the 1980s, a plaque at NASA’s Johnson Space Center in Houston, Texas, declared: “In God we trust. All others must bring data.” Helga Nowotny’s latest book, In AI We Trust, is more than a play on the first phrase in this quote attributed to statistician W. Edwards Deming. It is most occupied with the second idea.

What happens, Nowotny asks, when we deploy artificial intelligence (AI) without interrogating its effectiveness, simply trusting that it ‘works’? What happens when we fail to take a data-driven approach to things that are themselves data driven? And what about when AI is shaped and influenced by human bias? Data can be inaccurate, of poor quality or missing. And technologies are, Nowotny reminds us, “intrinsically intertwined with conscious or unconscious bias since they reflect existing inequalities and discriminatory practices in society”.



Geek out! Lots to learn here. (Plenty of buzzwords to make you seem smart!)

https://www.infoworld.com/article/3634602/explainable-ai-explained.html

Explainable AI explained

Explainable AI (XAI), also called interpretable AI, refers to machine learning and deep learning methods that can explain their decisions in a way that humans can understand. The hope is that XAI will eventually become just as accurate as black-box models.

Explainability can be ante-hoc (directly interpretable white-box models) or post-hoc (techniques to explain a previously trained model or its prediction). Ante-hoc models include explainable neural networks (xNNs), explainable boosting machines (EBMs), supersparse linear integer models (SLIMs), reversed time attention model (RETAIN), and Bayesian deep learning (BDL).

Post-hoc explainability methods include local interpretable model-agnostic explanations (LIME) as well as local and global visualizations of model predictions such as accumulated local effect (ALE) plots, one-dimensional and two-dimensional partial dependence plots (PDPs), individual conditional expectation (ICE) plots, and decision tree surrogate models.
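Of the post-hoc methods listed, a one-dimensional partial dependence plot is the simplest to illustrate: for each candidate value of one feature, you overwrite that feature for every row in the dataset and average the model's predictions. Here is a minimal model-agnostic sketch (the function and parameter names are illustrative, not from any particular XAI library):

```python
import numpy as np

def partial_dependence_1d(predict, X, feature, grid):
    """One-dimensional partial dependence of a model on a single feature.

    predict  -- callable mapping an (n, d) array to n predictions
    X        -- background dataset, shape (n, d)
    feature  -- column index of the feature to vary
    grid     -- sequence of values to sweep that feature over

    For each grid value v, every row of X has column `feature` set to v,
    and the model's predictions are averaged. The resulting curve is the
    marginal effect of the feature on the prediction.
    """
    X = np.asarray(X, dtype=float)
    curve = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v      # hold the feature fixed at v for all rows
        curve.append(predict(X_mod).mean())
    return np.asarray(curve)

# Toy model for illustration: prediction = 2*x0 + x1.
toy_model = lambda X: 2 * X[:, 0] + X[:, 1]
X = np.array([[0.0, 1.0], [0.0, 3.0]])
pd_curve = partial_dependence_1d(toy_model, X, feature=0, grid=[0.0, 1.0])
# For this linear toy model the curve is 2*v + mean(x1) = [2.0, 4.0].
```

ICE plots are the same computation without the final averaging step (one curve per row), which is why the two are usually discussed together.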

