Saturday, July 23, 2022

I know we were training people not to do this as far back as the 1960’s. This is why we include history in our security classes. There is always some young whipper-snapper who thinks people can be trusted.

https://www.bleepingcomputer.com/news/security/atlassian-confluence-hardcoded-password-was-leaked-patch-now/

Atlassian: Confluence hardcoded password was leaked, patch now!

Australian software firm Atlassian warned customers to immediately patch a critical vulnerability that provides remote attackers with hardcoded credentials to log into unpatched Confluence Server and Data Center servers.

As the company revealed this week, the Questions for Confluence app (installed on over 8,000 servers) creates a disabledsystemuser account with a hardcoded password to help admins migrate data from the app to the Confluence Cloud.

One day after releasing security updates to address the vulnerability (tracked as CVE-2022-26138), Atlassian warned admins to patch their servers as soon as possible, given that the hardcoded password had been found and shared online.
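If you want a quick self-check before patching, a sketch along these lines would do it. It assumes the instance exposes the standard Confluence Server/Data Center REST API and that you have admin credentials to authenticate with; the base URL and credentials below are placeholders, and the only detail taken from the advisory is the disabledsystemuser account name.

# Rough self-check: does the 'disabledsystemuser' account exist on this instance?
# Assumes the standard Confluence Server/Data Center REST API is reachable and
# that the URL and credentials below are replaced with real values (placeholders here).
import requests

BASE_URL = "https://confluence.example.com"   # hypothetical instance URL
ADMIN_AUTH = ("admin", "change-me")           # placeholder admin credentials

def disabledsystemuser_present(base_url, auth):
    """Return True if the disabledsystemuser account exists on the server."""
    resp = requests.get(
        f"{base_url}/rest/api/user",
        params={"username": "disabledsystemuser"},
        auth=auth,
        timeout=10,
    )
    if resp.status_code == 200:      # user found
        return True
    if resp.status_code == 404:      # no such user
        return False
    resp.raise_for_status()          # anything else is unexpected
    return False

if __name__ == "__main__":
    if disabledsystemuser_present(BASE_URL, ADMIN_AUTH):
        print("disabledsystemuser exists: patch, then disable or remove the account.")
    else:
        print("Account not found on this instance.")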





You still need a deep understanding of the field to be sure…

https://news.mit.edu/2022/explained-how-tell-if-artificial-intelligence-working-way-we-want-0722

Explained: How to tell if artificial intelligence is working the way we want it to

The most popular types of local explanation methods fall into three broad categories.

The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.

Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the model’s prediction.

Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.

“Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted,” says Zhou.
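To make feature attribution concrete, here is a minimal sketch using the SHAP library on a toy tabular model. The synthetic loan-style data, the column names, and the random-forest model are all invented for illustration; the article does not tie the idea to any particular tool.

# Local feature attribution with SHAP on a toy tabular model.
# The "loan" data and column names are made up for illustration.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "credit_score": rng.normal(680, 50, 500),
    "debt_ratio": rng.uniform(0.0, 0.8, 500),
})
# Toy label: approve when income and credit score are high and debt is low.
y = ((X["income"] > 55_000) & (X["credit_score"] > 650) & (X["debt_ratio"] < 0.5)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes a single prediction to the input features.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X.iloc[[0]])   # attributions for one applicant

# Depending on the shap version, sv is either a list of per-class arrays or one
# array with a trailing class dimension; normalize to the "approved" class.
if isinstance(sv, list):
    attributions = sv[1][0]
elif sv.ndim == 3:
    attributions = sv[0, :, 1]
else:
    attributions = sv[0]

for feature, value in zip(X.columns, attributions):
    print(f"{feature}: {value:+.4f}")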

A second type of explanation method is known as a counterfactual explanation. Given an input and a model’s prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model’s prediction, need to be higher for her to be approved.

“The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn’t get it, this explanation would tell them what they need to do to achieve their desired outcome,” he says.
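A bare-bones counterfactual search can be as simple as nudging features until the prediction flips. The sketch below does that against a toy loan-approval model; the data, the step sizes, and the greedy strategy are invented for illustration, and dedicated counterfactual libraries search far more carefully (and try to keep the suggested changes realistic).

# Toy counterfactual explanation: nudge credit score or income until the
# model approves the loan. Data and step sizes are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Features: [credit_score, annual_income]; label 1 = loan approved.
X = np.column_stack([rng.normal(680, 50, 1000), rng.normal(60_000, 15_000, 1000)])
y = ((X[:, 0] > 660) & (X[:, 1] > 50_000)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

def counterfactual(x, steps=((5, 0), (0, 1_000)), max_iter=200):
    """Greedily add +5 credit score or +$1,000 income until the model approves."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        if model.predict([x])[0] == 1:
            return x
        # Try each single step and keep the one that raises approval probability most.
        candidates = [x + np.asarray(s, dtype=float) for s in steps]
        probs = [model.predict_proba([c])[0, 1] for c in candidates]
        x = candidates[int(np.argmax(probs))]
    return None  # no counterfactual found within the step budget

denied = [620.0, 42_000.0]   # an applicant the model rejects
print("counterfactual for", denied, "->", counterfactual(denied))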

The third category of explanation methods is known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.

A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
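The brute-force version of this is leave-one-out retraining: drop one training example, refit, and measure how much the prediction for the query point moves. The sketch below does exactly that on a small synthetic problem; in practice this is approximated with influence functions or similar techniques, since retraining once per training sample does not scale.

# Brute-force sample importance: leave one training example out, retrain,
# and see how much the prediction for a single query point changes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
query = X[:1]                          # the prediction we want to explain
X_train, y_train = X[1:], y[1:]

full_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
base_prob = full_model.predict_proba(query)[0, 1]

influence = np.zeros(len(X_train))
for i in range(len(X_train)):
    mask = np.arange(len(X_train)) != i
    m = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
    # How much does removing training sample i move the query's predicted probability?
    influence[i] = base_prob - m.predict_proba(query)[0, 1]

top = np.argsort(np.abs(influence))[::-1][:5]
print("most influential training samples:", top)
print("influence scores:", influence[top])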


