Perspective.
https://www.bespacific.com/artificial-intelligence-and-the-future-of-work/
Artificial Intelligence and the Future of Work
National Academies of Sciences, Engineering, and Medicine. 2025. Artificial Intelligence and the Future of Work. Washington, DC: The National Academies Press.
Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers. Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market. Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments.
Perhaps not so smart after all.
https://www.schneier.com/blog/archives/2025/12/ais-exploiting-smart-contracts.html
AIs Exploiting Smart Contracts
I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.
Here’s some interesting research on training AIs to automatically exploit smart contracts:
AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense.
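For readers who want a concrete sense of what “exploiting smart contracts” looks like: one classic bug class that automated tools hunt for is reentrancy, where a contract sends funds before updating its own ledger, so the recipient’s callback can withdraw again and again. The sketch below is a minimal Python toy model of that pattern only; it is not from SCONE-bench or the Anthropic project, the VulnerableVault and Attacker classes are hypothetical, and real exploits target deployed EVM bytecode rather than Python objects.

```python
# Purely illustrative toy model of reentrancy, a classic smart contract bug.
# NOT code from SCONE-bench; VulnerableVault and Attacker are hypothetical.

class VulnerableVault:
    """Toy 'contract' that pays out before updating its own books."""

    def __init__(self, other_deposits: int):
        self.balances = {}            # per-account balances
        self.total = other_deposits   # funds the vault holds for everyone

    def deposit(self, who, amount: int):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.total += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount == 0 or amount > self.total:
            return
        self.total -= amount
        # Bug: the external call below can re-enter withdraw() while the
        # caller's balance is still recorded as unspent.
        who.receive(self, amount)
        self.balances[who] = 0


class Attacker:
    """Re-enters withdraw() from its payment callback to drain the vault."""

    def __init__(self):
        self.stolen = 0

    def receive(self, vault: "VulnerableVault", amount: int):
        self.stolen += amount
        # Our balance has not been zeroed yet, so withdraw again while the
        # vault still has enough funds to cover it.
        if vault.balances.get(self, 0) <= vault.total:
            vault.withdraw(self)


if __name__ == "__main__":
    vault = VulnerableVault(other_deposits=100)  # 100 units of others' money
    thief = Attacker()
    vault.deposit(thief, 10)                     # small legitimate deposit
    vault.withdraw(thief)                        # triggers recursive re-entry
    print(f"Deposited 10, drained {thief.stolen}")  # drains all 110 units
```

The point of the toy is that the flaw is mechanical and pattern-like, which is exactly the kind of thing an automated agent can search for at scale across thousands of deployed contracts.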
How to violate Trump’s Executive Order…
https://www.transformernews.ai/p/new-york-governor-hochul-raise-act-sb-53
New York’s governor is trying to turn the RAISE Act into an SB 53 copycat
New York Governor Kathy Hochul is proposing a dramatic rewrite of the RAISE Act, the AI transparency and safety bill that recently passed the state legislature, according to two sources who reviewed the governor’s redlines on the bill.
The governor’s proposal would strike the RAISE Act in its entirety and replace it with verbatim language from California’s recently enacted law, SB 53, with minimal changes. SB 53 is generally viewed as a lighter-touch approach. One source who spoke with Transformer on the condition of anonymity said the proposal would effectively make SB 53, a law that “was always meant to be a floor” for AI regulation, “suddenly become the ceiling.”