Monday, May 31, 2021

Something to add to security breach laws…

https://www.databreaches.net/ethical-disclosures-are-being-ignored-an-unchecked-security-crisis/

Ethical disclosures are being ignored: an unchecked security crisis

Ron Nahamias, Cyberpion co-founder and CBO, has a piece in Security Magazine that includes a topic near and dear to my heart — companies that do not provide a way to notify them of a security breach, leak, or vulnerability. He writes, in part:

Sometimes the burying of the head in the sand, even if it’s borne out of desperation and a practice of being overworked and understaffed, turns into something deliberate.
But while companies are dragging their feet, bad actors are mobilizing their armies. In my own work, I’ve met CISOs — more than I care to admit — who create an email address that doesn’t even fit their company’s standard. This makes them harder to contact, and therefore, essentially impossible to alert. Some organizations’ existing disclosure programs are even designated as “top secret,” bound by strict NDAs and accessible by invitation only. The drawbridge is always up; the moat is considered impossible. And what organizations don’t know, they are not beholden to either address or resolve. I’ve also run into plenty of organizations who declare outright that they don’t want to receive disclosures, because they have no desire and/or no capacity to deal with the liabilities created by them.

Read more on Security Magazine.

For the last 15 years, I have been loudly yelling that entities should be required to display monitored contact information on their websites telling people how to report a breach (or leak, or any security concern). The current situation remains lopsided: ethical researchers have a duty to disclose responsibly, but entities have no legal obligation to make such disclosure possible and successful by providing working, monitored contact methods.
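There is already a lightweight, voluntary mechanism for exactly this: the proposed security.txt standard (securitytxt.org), a small text file at /.well-known/security.txt listing a monitored security contact. The article doesn’t mention it; purely as a rough illustration, here is a minimal Python sketch that checks whether a given domain (a placeholder here) publishes one.

import urllib.request
import urllib.error

def find_security_contact(domain: str) -> list[str]:
    """Fetch /.well-known/security.txt and return any Contact: lines it advertises."""
    url = f"https://{domain}/.well-known/security.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except (urllib.error.HTTPError, urllib.error.URLError):
        return []  # nothing published: the researcher is left guessing
    return [line.strip() for line in body.splitlines() if line.lower().startswith("contact:")]

print(find_security_contact("example.com"))  # placeholder domain

If that check comes back empty and the site lists no other security contact, an ethical researcher has nowhere to send the disclosure, which is exactly the lopsidedness described above.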

Isn’t it time the government made this mandatory? Isn’t it time, already?





Another industry hackers can shut down at will?

https://news.softpedia.com/news/jbs-foods-shuts-down-due-to-massive-cyberattack-533076.shtml

JBS Foods Shuts Down Due to Massive Cyberattack

According to The Sydney Morning Herald, the attack affected JBS Foods facilities in Australia as well as in the United States, Canada, and other nations. JBS-owned Primo Foods is Australia's largest producer of ham, bacon, salami, and sausages, and operates meat plants and beef fattening facilities.

Cattle and lamb production was halted Monday at all JBS meat plants in Australia after an attack on the company's information systems over the weekend. Farmers and grocers are uncertain how long JBS will be down, and thousands of meat workers are worried about losing their jobs.





A little education never hurt.

https://threatpost.com/taxonomy-evolution-ransomware/166462/

On the Taxonomy and Evolution of Ransomware

Given the frequency with which “ransomware” appears in news articles, it may be worthwhile to take a step back and actually consider what the term means. Any malware or attack that culminates in extorting ransom from the victim is commonly referred to as ransomware. The general idea is to encrypt the victims’ data and to promise to deliver the key needed to decrypt it in return for a paid ransom.

But there are very different types of attacks which are all called “ransomware.” Let’s start by dissecting them.
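Before dissecting the types, the core mechanism is worth seeing in miniature. This toy sketch is mine, not the article’s; it uses the common Python cryptography package simply to show why encryption-based extortion has leverage: ciphertext locked with a key only the attacker holds is unrecoverable without that key.

from cryptography.fernet import Fernet, InvalidToken

attacker_key = Fernet.generate_key()          # key the victim never sees
victim_data = b"contents of an important file..."

ciphertext = Fernet(attacker_key).encrypt(victim_data)   # symmetric encryption

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)    # guessing a key fails outright
except InvalidToken:
    print("wrong key: data stays locked")

assert Fernet(attacker_key).decrypt(ciphertext) == victim_data   # only the right key recovers it

Real incidents differ mainly in scale and in what is held hostage, which is where the very different types the article goes on to dissect come in.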





No surprise. Anyone can look at technology and conclude, “I can use that in my business!” (Even if their business is crime.)

https://www.politico.eu/article/artificial-intelligence-criminals/

One group that’s embraced AI: Criminals

This article is part of “The age of surveillance,” a special report on artificial intelligence.

Lawmakers are still figuring out how best to use artificial intelligence. Lawbreakers are doing the same.

"We have crime-as-a-service, we have AI-as-a-service," said Philipp Amann, head of strategy at EU law enforcement agency's Europol's European Cybercrime Centre. "We'll have AI-for-crime-as-a-service too."

Malicious uses of artificial intelligence include AI-powered malware, AI-powered fake social media account farming, distributed denial-of-service attacks aided by AI, deep generative models that create fake data, and AI-supported password cracking, according to a report by the EU's cybersecurity agency published in December.

Europol, together with cybersecurity firm Trend Micro and the U.N.'s research institute UNICRI, found software that guesses passwords based on an AI-powered analysis of 1.4 billion leaked passwords, allowing hackers to gain access to systems more quickly.

They also found cheap software offerings that can mislead platforms like streaming services and social media networks in order to create smart bot accounts. In France, a group of independent music labels, collecting societies and producers are complaining to the government about “fake streams,” whereby tracks are shown to be played by bots, or real people hired to artificially boost views, benefiting the artist whose tracks are played.

Other fraudsters are developing AI tools to generate better fake "phishing" email content to trick people into handing over login credentials or banking information.





Change does not occur by reprogramming individuals but by influencing their culture.

https://www.bostonglobe.com/2021/05/31/opinion/algorithms-should-be-subject-continual-audits-weed-out-bias/

Algorithms should be subject to continual audits to weed out bias

It’s little wonder, as Kalinda Ukanwa asserts in “Algorithmic bias isn’t just unfair — it’s bad for business,” that artificial intelligence “simply [replicates] our existing prejudices.” After all, when it comes to bias, the weakest link in AI development is the human coder of those algorithms.

The designer carries along to the task his or her own baggage of predispositions that skew decision-making, often for the worse. No matter the care taken to make the algorithms bias-free, at least some of the designer’s prejudices insinuate their way into the product.

Biases’ roots run deep, but not all are mere spawn of the biases harbored by the developer. As Ukanwa details, in some cases, as part of machine learning, prejudicial decision-making stems from the AI recognizing patterns in data related to an institution’s past decisions and deciding that’s the model to replicate.



(Related) Wouldn’t you make images that reflect your biases?

https://venturebeat.com/2021/05/30/the-power-of-synthetic-images-to-train-ai-models/

The power of synthetic images to train AI models

Synthetic data is artificial data generated by a computer program rather than collected from real-world events. Ideally, synthetic data is created from a “seed” of real data — a few false positives and negatives, and a few true positives and negatives. Then those real pieces of data can be manipulated in various ways to create a synthetic dataset good enough and large enough to drive the creation of successful AI models.
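As a rough sketch of that seed-and-manipulate idea (my own illustration, not the article’s), the snippet below takes a handful of labeled “real” samples and jitters them with random noise to grow a much larger synthetic training set; the feature values, copy counts, and noise scale are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)

# A tiny "seed" of real, labeled samples: a few positives and a few negatives.
seed_features = np.array([[0.90, 0.80], [0.80, 0.95], [0.10, 0.20], [0.15, 0.05]])
seed_labels = np.array([1, 1, 0, 0])

def synthesize(features, labels, copies_per_sample=250, noise_scale=0.05):
    """Grow a synthetic dataset by perturbing each real seed sample many times."""
    synthetic_x, synthetic_y = [], []
    for x, y in zip(features, labels):
        noise = rng.normal(0.0, noise_scale, size=(copies_per_sample, features.shape[1]))
        synthetic_x.append(x + noise)                      # jittered variants of one real sample
        synthetic_y.append(np.full(copies_per_sample, y))  # all variants keep the original label
    return np.vstack(synthetic_x), np.concatenate(synthetic_y)

X_syn, y_syn = synthesize(seed_features, seed_labels)
print(X_syn.shape, y_syn.shape)  # (1000, 2) (1000,): big enough to train a simple classifier

In practice the manipulations can be far richer, but the principle is the same: a small amount of trusted real data anchors a much larger training set.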





I buy dozens of books at library used-book sales several times a year. Rather than just donate them back to my local library, I might try making some cash. (So I can buy more books.)

https://www.makeuseof.com/best-apps-for-buying-selling-preowned-books/

The 5 Best Apps for Buying and Selling Pre-Owned Books

These apps will help you make money selling books you no longer need, or snag a bargain when buying new titles.


