Friday, May 28, 2021

I guess they haven’t gotten around to a “lessons learned” review of the last one.

https://www.nbcnews.com/tech/security/solarwinds-hackers-are-it-again-targeting-150-organizations-microsoft-warns-n1268893

SolarWinds hackers are at it again, targeting 150 organizations, Microsoft warns

The Russia-based group behind the SolarWinds hack has launched a new campaign that appears to target government agencies, think tanks and non-governmental organizations, Microsoft said Thursday.

Nobelium launched the current attacks after getting access to an email marketing service used by the United States Agency for International Development, or USAID, according to Microsoft.

"These attacks appear to be a continuation of multiple efforts by Nobelium to target government agencies involved in foreign policy as part of intelligence gathering efforts," Tom Burt, Microsoft vice president of customer security and trust, wrote in a blog post.





Oh, I feel safer already! It is so comforting to know that the agency that can’t secure a single point of entry at an airport will now protect thousands of miles of pipeline.

https://www.csoonline.com/article/3620300/tsa-s-pipeline-cybersecurity-directive-is-just-a-first-step-experts-say.html#tk.rss_all

TSA’s pipeline cybersecurity directive is just a first step, experts say

The new, hastily announced security directive requires US pipeline companies to appoint a cybersecurity coordinator and report possible breaches within 12 hours.

The Transportation Security Administration (TSA), an arm of the US Department of Homeland Security (DHS), released a Security Directive on Enhancing Pipeline Cybersecurity. TSA released the document two days after the Biden administration leaked the details of the regulations and less than a month after the ransomware attack on Colonial Pipeline created a significant gas shortage in the Southeast US.

As a result of post-9/11 government maneuvering, the TSA gained statutory authority to secure surface transportation and ensure pipeline safety. The directive follows largely ineffective, voluntary pipeline security guidelines established by the TSA in 2010 and updated in 2018.

This new regulation requires that designated pipeline companies report cybersecurity incidents to the DHS's Cybersecurity and Infrastructure Security Agency (CISA) no later than 12 hours after an incident is identified. The TSA estimates that about 100 companies in the US would fall under the directive's mandates.





Podcast.

https://whyy.org/episodes/the-promise-and-pitfalls-of-ai/

The Promise and Pitfalls of AI

On this episode, we hear from scientists and thinkers who argue that we should look at AI not as a threat or competition, but as an extension of our minds and abilities. They explain what AI is good at, and where humans have the upper hand. We look at AI in three different settings: medicine, work, and warfare, asking how it affects our present — and how it could shape our future.



(Related)

https://www.nydailynews.com/news/national/ny-microsoft-president-orwell-1984-brad-smith-ai-20210528-66btcxssgfczlhnz62neeojwwq-story.html

Microsoft president suggests George Orwell’s ‘1984’ could happen by 2024 because of AI tech

Microsoft president Brad Smith is worried that the totalitarian surveillance famously seen in George Orwell’s novel “1984” could exist in the real world soon because of artificial intelligence.

“If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up,” Smith told the BBC.





My AI assures me that AIs would never do that.

https://www.globenewswire.com/news-release/2021/05/27/2237870/0/en/Litigating-Artificial-Intelligence-When-Does-AI-Violate-Our-Legal-Rights.html

Litigating Artificial Intelligence: When Does AI Violate Our Legal Rights?

From the minds of Canada’s leading law and technology experts comes a playbook for understanding the multi-faceted intersection of AI and the law

Emond Publishing, Canada’s leading independent legal publisher, today announced the release of Litigating Artificial Intelligence, a book examining AI-informed legal determinations, AI-based lawsuits, and AI-enabled litigation tools. Anchored by the expertise of general editors Jill R. Presser, Jesse Beatson, and Gerald Chan, this title offers practical insights regarding AI’s decision-making capabilities, position in evidence law and product-based lawsuits, role in automating legal work, and use by the courts, tribunals, and government agencies.





AI identifies targets as they ‘pop up.’ Are they confirmed before they are attacked, or is this a computer-run war where humans are only tools?

https://www.jpost.com/arab-israeli-conflict/gaza-news/guardian-of-the-walls-the-first-ai-war-669371

Israel's operation against Hamas was the world's first AI war

Having relied heavily on machine learning, the Israeli military is calling Operation Guardian of the Walls the first artificial-intelligence war.

“For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy,” an IDF Intelligence Corps senior officer said. “This is a first-of-its-kind campaign for the IDF. We implemented new methods of operation and used technological developments that were a force multiplier for the entire IDF.”





My AI wants to point out that the tweets were not written by AI. (AI good, humans not so good.)

https://www.vox.com/recode/22455140/lemonade-insurance-ai-twitter

A disturbing, viral Twitter thread reveals how AI-powered insurance can go wrong

Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model — and fend off serious accusations of bias, discrimination, and general creepiness — ever since.

The prospect of being judged by AI for something as important as an insurance claim was alarming to many who saw the thread, and it should be. We’ve seen how AI can discriminate against certain races, genders, economic classes, and disabilities, among other categories, leading to those people being denied housing, jobs, education, or justice. Now we have an insurance company that prides itself on largely replacing human brokers and actuaries with bots and AI, collecting data about customers without them realizing they were giving it away, and using those data points to assess their risk.


