Saturday, December 11, 2021

Now we’re getting serious. Nothing is worse than a bagel without its schmear! (Perhaps they need better lox?)

https://gizmodo.com/ransomware-jerks-helped-cause-the-cream-cheese-shortage-1848195368

Ransomware Jerks Helped Cause the Cream Cheese Shortage

Following attacks on our hospitals, municipal governments, and fuel supplies, hackers have finally gone too far: They fucked with America’s cream cheese.

There’s been a serious shortage of cream cheese in recent weeks—one of the many seemingly random products that have come into short supply amid widespread supply chain disruption and labor shortages. According to Bloomberg, in this instance, hackers played a role. In mid-October, cheese giant Schreiber Foods (which has a cream cheese unit comparable to industry leader Kraft’s) was forced to close for several days due to a cyber attack. The hack coincided with the annual height of the U.S. cream cheese season—think cheesecakes—on top of demand that was already high due to workers remaining home during the pandemic, Bloomberg wrote.



Less serious? Only life or death…

https://www.zdnet.com/article/brazilian-ministry-of-health-suffers-cyberattack-and-covid-19-vaccination-data-vanishes/

Brazilian Ministry of Health suffers cyberattack and COVID-19 vaccination data vanishes

Hackers claimed to have copied and deleted 50 TB worth of data from internal systems.



Update.

https://www.cpomagazine.com/cyber-security/colorado-energy-company-suffered-a-cyber-attack-destroying-25-years-of-data-and-shut-down-internal-controls/

Colorado Energy Company Suffered a Cyber Attack Destroying 25 Years of Data and Shut Down Internal Controls

Delta-Montrose Electric Association (DMEA) suffered a malicious cyber attack that shut down 90% of its internal controls and wiped 25 years of historical data.

DMEA says the cyber attack started on November 7 before spreading and affecting internal systems, support systems, payment processing tools, billing platforms, and other customer-facing tools.

… “By the way, a large percentage of the smaller, distribution-level electric cooperatives are immune from cyber-attack since they don’t use automation for their operational technology.”

Lawrence, however, noted that the energy company failed to officially report the cyber attack as a ransomware incident despite the evidence. Ransomware attacks cause reputational damage to the victims, and many are hesitant to admit experiencing them.



Podcast. (40 min.) Can federal agencies have ‘proprietary’ data?

https://governmentciomedia.com/ai-and-predictive-analytics

AI and Predictive Analytics

Newfound AI capacities have allowed federal agencies to leverage proprietary data towards predictive modeling, allowing them to more effectively deliver services and act upon their core mission. Hear from AI specialists on how their agencies are leveraging data to create models that further essential research and analysis.



Are we relying on high-school-level AI?

https://futurism.com/deepmind-ai-reading-comprehension

DeepMind Says Its New AI Has Almost the Reading Comprehension of a High Schooler

Alphabet’s AI research company DeepMind has released the next generation of its language model, and it says that it has close to the reading comprehension of a high schooler — a startling claim.

Such a system could allow us to “safely and efficiently to summarize information, provide expert advice and follow instructions via natural language,” according to a statement.



What does AI know about the future of AI? The AI actually argued both sides…

https://theconversation.com/we-invited-an-ai-to-debate-its-own-ethics-in-the-oxford-union-what-it-said-was-startling-173607

We invited an AI to debate its own ethics in the Oxford Union – what it said was startling

It’s natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Said Business School. In its first year, we’ve done sessions on everything from the AI-driven automated stock trading systems in Singapore, to the limits of facial recognition in US policing.

We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.

It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of creative commons sources.

In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.

The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:

AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.


(Related) A human view.

https://www.healthcareitnews.com/news/uc-berkeleys-ziad-obermeyer-optimistic-about-algorithms

UC Berkeley's Ziad Obermeyer is optimistic about algorithms

As an associate professor at University of California, Berkeley, Dr. Ziad Obermeyer has made waves throughout the healthcare informatics industry with his work on machine learning, public policy and computational medicine.

In 2019, he was the lead author on a paper published in Science showing that a widely used population health algorithm exhibits significant racial bias.

In recent years, the subject of identifying and confronting bias in machine learning has continued to emerge in healthcare spaces.

Obermeyer, who will present at the HIMSS Machine Learning for AI and Healthcare event next week – alongside Michigan State University Assistant Professor Mohammad Ghassemi, Virginia Commonwealth University Assistant Professor Shannon Harris and HIMSS Outside Counsel Karen Silverman – sat down with Healthcare IT News to discuss how stakeholders can take bias into consideration when developing algorithms and why he feels optimistic about artificial intelligence.
