Interesting targets for hackers, but likely just as easy to create independently, without the hassle of hacking.
Meta says it may stop development of AI systems it deems too risky
Meta CEO Mark Zuckerberg has pledged to make artificial general intelligence (AGI) — which is roughly defined as AI that can accomplish any task a human can — openly available one day. But in a new policy document, Meta suggests that there are certain scenarios in which it may not release a highly capable AI system it developed internally.
The document, which Meta is calling its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: “high-risk” and “critical-risk” systems.
As Meta defines them, both “high-risk” and “critical-risk” systems are capable of aiding in cyber, chemical, and biological attacks, the difference being that “critical-risk” systems could result in a “catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context.” High-risk systems, by contrast, might make an attack easier to carry out, but not as reliably as a critical-risk system.