Will this law cut out the obfuscation we see so often?
https://www.makeuseof.com/what-is-circia-and-how-does-cybersecurity-law-impact-you/
What Is CIRCIA and How Does This Cybersecurity Law Impact You?
… The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) is a federal law requiring “covered entities” that deal with critical infrastructure to report cyber incidents to the Cybersecurity and Infrastructure Security Agency (CISA).
If you encounter a cyberattack, you might want to share your experience with your security team or anyone else who can help prevent a recurrence. Until recently, sharing such information with a government agency was optional. CIRCIA now mandates that organizations and chief information security officers (CISOs) report cyber incidents to CISA for a more secure cyber environment.
Signed into law by President Joe Biden in 2022, CIRCIA stipulates that you must report all cyber incidents within 72 hours of becoming aware of them. Should you pay a ransom to attackers, you must report it within 24 hours.
Will Canada ban ChatGPT?
Canada opens investigation into AI firm behind ChatGPT
… The investigation by the Office of the Privacy Commissioner into OpenAI was opened in response to a "complaint alleging the collection, use and disclosure of personal information without consent," the agency said.
Perhaps I could sell a course on “talking to ChatGPT”
https://www.zdnet.com/article/do-you-like-asking-chatgpt-questions-you-could-get-paid-a-lot-for-it/
Do you like asking ChatGPT questions? You could get paid (a lot) for it
… If you have ever asked ChatGPT to help you with a task, you have written a ChatGPT prompt. Lucky for you, many companies are looking to hire people with that skill to optimize their company's AI usage and results. Most importantly, they are offering generous pay.
(Related) The course might even pay for itself!
https://www.theregister.com/2023/04/04/chatgpt_exfiltration_tool/
Can ChatGPT bash together some data-stealing code? With the right prompts, sure
A Forcepoint staffer has blogged about how he used ChatGPT to craft code that exfiltrates data from an infected machine. At first, it sounds bad, but in reality, it's nothing an intermediate or keen beginner programmer couldn't whack together themselves anyway.
His experiment does, to some extent, highlight how the code-suggesting unreliable chatbot, built by OpenAI and pushed by Microsoft, could be used to cut some corners in malware development or automate the process.
It also shows how someone, potentially one without any coding experience, can make the bot jump its guardrails, which are supposed to prevent it from outputting potentially dangerous code, and have the AI service put together an undesirable program anyway.