Interesting. Perhaps AI is coming closer to living up to its hype…
https://www.oneusefulthing.org/p/prophecies-of-the-flood
Prophecies of the Flood
Recently, something shifted in the AI industry. Researchers began speaking urgently about the arrival of supersmart AI systems, a flood of intelligence. Not in some distant future, but imminently. They often refer to AGI - Artificial General Intelligence - defined, albeit imprecisely, as machines that can outperform expert humans across most intellectual tasks. This availability of intelligence on demand will, they argue, change society deeply and will change it soon.
There are plenty of reasons not to believe insiders, as they have clear incentives to make bold predictions: they're raising capital, boosting stock valuations, and perhaps convincing themselves of their own historical importance. They're technologists, not prophets, and the track record of technological predictions is littered with confident declarations that turned out to be decades premature. Even setting aside these human biases, the underlying technology itself gives us reason for doubt. Today's Large Language Models, despite their impressive capabilities, remain fundamentally inconsistent tools - brilliant at some tasks while stumbling over seemingly simpler ones. This "jagged frontier" is a core characteristic of current AI systems, one that won't be easily smoothed away.
Plus, even assuming researchers are right about reaching AGI in the next year or two, they are likely overestimating the speed at which humans can adopt and adjust to a technology. Changes to organizations take a long time. Changes to systems of work, life, and education are slower still. And technologies need to find specific uses that matter in the world, which is itself a slow process. We could have AGI right now and most people wouldn't notice (indeed, some observers have suggested that this has already happened, arguing that the latest AI models, like Claude 3.5, are effectively AGI).
New technology, new crimes?
https://www.cambridge.org/core/journals/cambridge-forum-on-ai-law-and-governance/article/generative-ai-and-criminal-law/CFBB64250CAC6A338A5504F0F41C54AB
Generative AI and criminal law
Several criminal offenses can originate from, or culminate with, the creation of content. Sexual abuse can be committed by producing intimate materials without the subject's consent, while incitement to violence or self-harm can begin with a conversation. When the task of generating content is entrusted to artificial intelligence (AI), it becomes necessary to explore the risks of this technology. AI changes criminal affordances because it creates new kinds of harmful content, amplifies the range of recipients, and can exploit cognitive vulnerabilities to manipulate user behavior. Given this evolving landscape, the question is whether policies aimed at fighting Generative AI-related harms should include criminal law. The bulk of criminal law scholarship to date would not criminalize AI harms, on the theory that AI lacks moral agency. Even so, the field of AI might need criminal law precisely because it entails moral responsibility. When a serious harm occurs, responsibility needs to be distributed according to the guilt of the agents involved; where guilt is lacking, responsibility must fall elsewhere, because those agents are innocent. Thus, legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to Generative AI.
Some good bad examples?
https://commons.allard.ubc.ca/fac_pubs/2793/
Artificial Intelligence & Criminal Justice: Cases and Commentary
When I was given the chance to develop a seminar this year at UBC's Peter A. Allard School of Law, I jumped at the opportunity to create something new and engaging. After brainstorming ideas with students, it quickly became evident that there was substantial interest and enthusiasm for a seminar on the growing integration of artificial intelligence into the criminal justice system.
Embarking on this journey has been a steep learning curve for me, as my students and I worked together to shape the course with input from generative AI tools like ChatGPT, Gemini, and Perplexity, as well as open-source materials from the Canadian Legal Information Institute and the Creative Commons search portal.
Delving into the case law in Canada and the U.S., reading the critical commentary, listening to podcasts and webinars, and playing around with the latest AI tools has been a lot of fun, but it has also made me realize how crucial it is, at this point in time, to have a focussed critical exploration of the benefits and risks of AI in the criminal justice context.
I hope that this open-access casebook will be a valuable resource for students, instructors, legal practitioners, and the public, offering insights into how AI is already influencing various aspects of the criminal justice lifecycle - including criminality and victimization, access to justice, policing, lawyering, adjudication, and corrections. If you're interested in a quick overview of the topics covered in this casebook, you can download the companion: Artificial Intelligence & Criminal Justice: A Primer (2024).
Attempts to be ethical.
https://www.researchgate.net/profile/Robert-Smith-169/publication/387723862_The_Top_10_AI_Ethics_Frameworks_Shaping_the_Future_of_Artificial_Intelligence/links/67795c65894c55208542eda3/The-Top-10-AI-Ethics-Frameworks-Shaping-the-Future-of-Artificial-Intelligence.pdf
The Top 10 AI Ethics Frameworks: Shaping the Future of Artificial Intelligence
The rapid advancement of artificial intelligence (AI) has created unprecedented opportunities and challenges, particularly in addressing ethical concerns surrounding its deployment. At the center of these discussions is the dual focus on enforcing ethical principles through robust regulation and embedding ethics as an intrinsic aspect of AI development. This article critically examines the top 10 AI ethics frameworks, each offering unique principles and guidelines to ensure AI's responsible and equitable impact on society. The frameworks explored range from regulatory models and philosophical paradigms to practical governance structures, reflecting the global effort to align AI innovation with the values of fairness, accountability, transparency, and societal benefit. By analysing their contributions, implications, and limitations, this article provides a comprehensive overview of humanity's collective endeavour to navigate the ethical complexities of AI and foster technologies that prioritize inclusivity, sustainability, and well-being.
Another opinion…
https://azure.microsoft.com/en-us/blog/explore-the-business-case-for-responsible-ai-in-new-idc-whitepaper/
Explore the business case for responsible AI in new IDC whitepaper
I am pleased to introduce Microsoft's commissioned whitepaper with IDC: The Business Case for Responsible AI. This whitepaper, based on IDC's Worldwide Responsible AI Survey sponsored by Microsoft, offers guidance to business and technology leaders on how to systematically build trustworthy AI. In today's rapidly evolving technological landscape, AI has emerged as a transformative force, reshaping industries and redefining the way businesses operate. Generative AI usage jumped from 55% in 2023 to 75% in 2024; the potential for AI to drive innovation and enhance operational efficiency is undeniable. However, with great power comes great responsibility. The deployment of AI technologies also brings with it significant risks and challenges that must be addressed to ensure responsible use.