Perspective.
https://www.theregister.com/2025/06/06/schneier_doge_risks/
Schneier tries to rip the rose-colored AI glasses from the eyes of Congress
Security guru Bruce Schneier played the skunk at the garden party in a Thursday federal hearing on AI's use in the government, focusing on the risks many are ignoring.
"The other speakers mostly talked about how cool AI was – and sometimes about how cool their own company was – but I was asked by the Democrats to specifically talk about DOGE and the risks of exfiltrating our data from government agencies and feeding it into AIs," Schneier explained in a blog post.
... "You all need to assume that adversaries have copies of all the data DOGE has exfiltrated and has established access into all the networks that DOGE has removed security controls from," he said.
That data can be used against you, Schneier warned, suggesting that any military action against the US would be heralded by the zeroing-out of bank accounts for military and political leaders.
(Related)
https://www.schneier.com/blog/archives/2025/06/report-on-the-malicious-uses-of-ai.html
Report on the Malicious Uses of AI
OpenAI just published its annual report on malicious uses of AI.
By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams.
These operations originated in many parts of the world, acted in many different ways, and focused on many different targets. A significant number appeared to originate in China: Four of the 10 cases in this report, spanning social engineering, covert influence operations and cyber threats, likely had a Chinese origin. But we’ve disrupted abuses from many other countries too: this report includes case studies of a likely task scam from Cambodia, comment spamming apparently from the Philippines, covert influence attempts potentially linked with Russia and Iran, and deceptive employment schemes.
Reports like these give a brief window into the ways AI is being used by malicious actors around the world. I say “brief” because last year the models weren’t good enough for these sorts of things, and next year the threat actors will run their AI models locally—and we won’t have this kind of visibility.
Potential for harm?
https://www.kunc.org/news/2025-06-01/colorado-ag-warns-parents-about-ai-chatbots-that-can-harm-kids
Colorado AG warns parents about AI chatbots that can harm kids
Colorado Attorney General Phil Weiser issued a consumer alert warning parents about the growing risks posed by social AI chatbots. These chatbots are designed to mimic human conversation and, in some cases, can lead young users into harmful interactions.
"These chatbots interact with people as if they were another person," Weiser said. "They can take on personas like a celebrity, fictional character or even a trusted adult, and the conversation can turn inappropriate or dangerous quickly, especially when it comes to sexual content, self-harm or substance use."
The alert, released May 21, comes amid a sharp rise in reports of children engaging with AI bots in ways that have resulted in mental health crises and unsafe behaviors. Weiser's office warns that children and teens may not realize they're interacting with an AI rather than a real person, making them more vulnerable to manipulation.