Perspective. AI is getting better at everything, including crime.
Anthropic warns of AI catastrophe if governments don't regulate in 18 months
Just days before the US presidential election, AI company Anthropic is advocating for its own regulation -- before, it argues, it's too late.
On Thursday, the company, which stands out in the industry for its focus on safety, released recommendations urging governments to implement "targeted regulation," alongside potentially worrying data on the rise of what it calls "catastrophic" AI risks.
In a blog post, Anthropic noted how much progress AI models have made in coding and cyber offense in just one year. "On the SWE-bench software engineering task, models have improved from being able to solve 1.96% of a test set of real-world coding problems (Claude 2, October 2023) to 13.5% (Devin, March 2024) to 49% (Claude 3.5 Sonnet, October 2024)," the company wrote. "Internally, our Frontier Red Team has found that current models can already assist on a broad range of cyber offense-related tasks, and we expect that the next generation of models -- which will be able to plan over long, multi-step tasks -- will be even more effective."