Tuesday, April 11, 2023

Always worth comparing. Did you forget something? Does this guide explain something better?

https://thehackernews.com/2023/04/ebook-step-by-step-guide-to-cyber-risk.html

[eBook] A Step-by-Step Guide to Cyber Risk Assessment

According to the guide, an effective cyber risk assessment includes these five steps:

  1. Understand the organization's security posture and compliance requirements

  2. Identify threats

  3. Identify vulnerabilities and map attack routes

  4. Model the consequences of attacks

  5. Prioritize mitigation options
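
Step 5 is where the assessment turns into an ordered work list. As a very rough illustration (mine, not the eBook's), here is a minimal Python sketch that scores each identified risk by likelihood times impact and sorts candidate mitigations accordingly; the example risks, the 1-to-5 scales, and the scoring formula are all illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Risk:
    name: str          # threat/vulnerability pairing from steps 2-3
    likelihood: int    # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int        # assumed scale: 1 (negligible) to 5 (severe), from the modeling in step 4
    mitigation: str    # candidate control to prioritize in step 5

    @property
    def score(self) -> int:
        # Simple likelihood x impact score; real frameworks weight these differently.
        return self.likelihood * self.impact

# Hypothetical findings, purely for illustration.
risks = [
    Risk("Phishing leading to credential theft", 4, 4, "Phishing-resistant MFA"),
    Risk("Unpatched internet-facing VPN appliance", 3, 5, "Emergency patching"),
    Risk("Lost unencrypted laptop", 2, 3, "Full-disk encryption policy"),
]

# Highest score first: the first mitigation to fund.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}  ->  {r.mitigation}")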





Sometimes, knowing where we have been helps explain where we are going.

https://www.makeuseof.com/gpt-models-explained-and-compared/

GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared

GPT models are revolutionizing natural language processing and transforming AI, so let's explore their evolution, strengths, and limitations.





Tools & Techniques. An important new skill.

https://www.zdnet.com/article/how-to-write-better-chatgpt-prompts/

How to write better ChatGPT prompts (and this applies to most other text-based AIs, too)

… no matter how good your prompts are, there's always the possibility that the AI will simply make stuff up. That said, there's a lot you can do when crafting prompts to ensure the best possible outcome. That's what we'll be exploring in this how-to.
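
The article's advice is aimed at the ChatGPT interface, but the same structure (give the model a role, relevant context, a specific task, and the output format you want) carries over to API calls. The snippet below is my own minimal sketch, assuming the pre-1.0 openai Python client and the gpt-3.5-turbo model; the prompt wording and parameters are illustrative, not taken from the article.

import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes an API key in the environment

# Role/persona for the model.
system_msg = "You are a careful assistant who answers only from the text the user provides."

# Context + explicit task + output format + an instruction not to guess.
user_msg = (
    "Context: the paragraph below is from an internal memo.\n"
    "Task: summarize it in exactly three bullet points for a non-technical manager.\n"
    "If something is not stated in the paragraph, write 'not stated' instead of guessing.\n\n"
    "Paragraph: <paste the text here>"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
    temperature=0.2,  # lower temperature reduces creative drift on factual tasks
)

print(response["choices"][0]["message"]["content"])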



(Related)

https://www.bespacific.com/we-need-to-tell-people-chatgpt-will-lie-to-them-not-debate-linguistics/

We need to tell people ChatGPT will lie to them, not debate linguistics

Simon Willison: ChatGPT lies to people. “This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this, not debating the most precise terminology to use to describe it.

We accidentally invented computers that can lie to us

I tweeted (and tooted) this:

We accidentally invented computers that can lie to us and we can’t figure out how to make them stop – Simon Willison (@simonw) April 5, 2023

Mainly I was trying to be pithy and amusing, but this thought was inspired by reading Sam Bowman’s excellent review of the field, Eight Things to Know about Large Language Models. In particular this:

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated…”


