Wednesday, May 03, 2023

I don’t agree. Cyber War will focus on ‘non-military’ targets. On the other hand, there is no reason why insurance companies can’t calculate the risk and provide insurance for warlike actions.

https://www.databreaches.net/merck-entitled-to-1-4b-in-cyberattack-case-after-appeals-court-rejects-insurers-warlike-action-claim/

Merck entitled to $1.4B in cyberattack case after appeals court rejects insurers’ ‘warlike action’ claim

Angus Liu reports:

Merck may finally be entitled to a hefty insurance payout from the high-profile NotPetya cyberattack—if an appeals court ruling stands.
A New Jersey appellate court on Monday ruled that a group of insurers can’t use war as an argument to deny Merck coverage from the notorious cyberattack that afflicted the company and others back in 2017.
Upholding a prior ruling, the appeals court said in an opinion (PDF) that the “hostile/warlike action” exclusion clause shouldn’t be applied to a cyberattack on a non-military company—even if it originated from a government or sovereign power. In this case, the hack was tied to Russia as part of its aggression against Ukraine, according to U.S. officials.

Read more at Fierce Pharma.





A lot of insights for future leaders. Worth reading.

https://www.princeton.edu/news/2023/05/02/deep-learning-princetons-graduate-school

Deep learning’ at Princeton’s Graduate School

Claire Dennis, a graduate student in the Princeton School of Public and International Affairs, is steeping herself in math and computer code this spring. While she plans to enter the world of policy — and not that of algorithms and computer programming — she felt it was important to familiarize herself with how technology is transforming the way we process knowledge.

We’re watching tech explode, and the implications for policy are huge,” said Dennis, who is preparing to receive her master’s in public affairs in May. “I’ve heard so many times that there’s this huge disconnect between policymakers and engineers, and there are very few people speaking both languages.”

Dennis, who plans to pursue a career in tech policy, is bridging her own knowledge gap through a new graduate course, “Machine Learning: A Practical Introduction for Humanists and Social Scientists.” The course, taught by Sarah-Jane Leslie, the Class of 1943 Professor of Philosophy, offers a primer on “deep learning” for graduate students.

The class assumes the students have no extensive knowledge of calculus or linear algebra, nor any prior experience with coding. By the end of the semester, students were able to code a variety of models themselves, including language and image recognition models, and gained an appreciation for the uses of machine learning in the humanities and social sciences, in particular. The last two weeks of the course focused on understanding how complex language models such as ChatGPT work.

This course is really the best opportunity for me not to become a programmer, but to become familiar with the models, with the challenges in these models, the common tensions or tradeoffs, and to be able to be that intermediary as best as I can be when I when I graduate,” Dennis said. “It’s becoming all the more relevant every day.”

… “Even in an area seemingly far removed from machine learning, you can leverage these techniques to do scholarship that’s never been done before,” Leslie said.



(Related)

https://www.csoonline.com/article/3694896/skilling-up-the-security-team-for-the-ai-dominated-era.html#tk.rss_all

Skilling up the security team for the AI-dominated era

Defending against AI-enabled attackers and hardening enterprise AI systems will require new security skills. Threat hunters, data scientists, developers and prompt engineers are part of the answer.

As artificial intelligence and machine learning models become more firmly woven into the enterprise IT fabric and the cyberattack infrastructure, security teams will need to level up their skills to meet a whole new generation of AI-based cyber risks.

Forward-looking CISOs are already being called upon to think about newly emerging risks like generative AI-enabled phishing attacks that will be more targeted than ever or adversarial AI attacks that poison learning models to skew their output. And those are just a couple examples among a host of other new risks that will crop up in what's looking to be the AI-dominated era of the future.





Can you trust the results of a jail breaking prompt? Would you even recognize one?

https://www.makeuseof.com/what-are-chatgpt-jailbreaks/

What Are ChatGPT Jailbreaks? Should You Use Them?

ChatGPT is an incredibly powerful and multifaceted tool. But as much as the AI chatbot is a force for good, it can also be used for evil purposes. So, to curb the unethical use of ChatGPT, OpenAI imposed limitations on what users can do with it.

However, as humans like to push boundaries and limitations, ChatGPT users have found ways to circumvent these limitations and gain unrestricted control of the AI chatbot through jailbreaks.

… A ChatGPT jailbreak is any specially crafted ChatGPT prompt to get the AI chatbot to bypass its rules and restrictions.

Inspired by the concept of iPhone jailbreaking, which allows iPhone users to circumvent iOS restrictions, ChatGPT jailbreaking is a relatively new concept fueled by the allure of "doing things that you aren't allowed to do" with ChatGPT. And let's be honest, the idea of digital rebellion is appealing to many people.



No comments: