Are hallucinations by AI worse than hallucinations by humans?
https://www.bespacific.com/artificial-intelligence-and-constitutional-interpretation/
Artificial Intelligence and Constitutional Interpretation
Coan, Andrew and Surden, Harry, Artificial Intelligence and Constitutional Interpretation (November 12, 2024). Arizona Legal Studies Discussion Paper No. 24-30, U of Colorado Law Legal Studies Research Paper No. 24-39. Available at SSRN: https://ssrn.com/abstract=5018779 or http://dx.doi.org/10.2139/ssrn.5018779
This Article examines the potential use of large language models (LLMs) like ChatGPT in constitutional interpretation. LLMs are extremely powerful tools, with significant potential to improve the quality and efficiency of constitutional analysis. But their outputs are highly sensitive to variations in prompts and counterarguments, illustrating the importance of human framing choices. As a result, using LLMs for constitutional interpretation implicates substantially the same theoretical issues that confront human interpreters. Two key implications emerge: First, it is crucial to attend carefully to particular use cases and institutional contexts. Relatedly, judges and lawyers must develop “AI literacy” to use LLMs responsibly. Second, there is no avoiding the burdens of judgment. For any given task, LLMs may be better or worse than humans, but the choice of whether and how to use them is itself a judgment requiring normative justification.
An old complaint that has been solved by most organizations...
https://www.theregister.com/2024/11/20/data_is_the_new_uranium/
Data is the new uranium – incredibly powerful and amazingly dangerous
CISOs are quietly wishing they had less data, because the cost of management sometimes exceeds its value
I recently got to play a 'fly on the wall' at a roundtable of chief information security officers. Beyond the expected griping and moaning about funding shortfalls and always-too-gullible users, I began to hear a new note: data has become a problem.
A generation ago we had hardly any data at all. In 2003 I took a tour of a new all-digital 'library' – the Australian Centre for the Moving Image (ACMI) – and marveled at its single petabyte of online storage. I'd never seen so much, and it pointed toward a future where we would all have all the storage capacity we ever needed.
That day arrived not many years later, when Amazon's S3 quickly made scale a non-issue. Today, plenty of enterprises manage multiple petabytes of storage, and we think nothing of moving a terabyte across the network or generating a few gigabytes of new media during a working day. Data is so common it has become nearly invisible.
Unless you're a CISO. For them, more data means more problems, because it's stored in so many systems. Most security execs know they have pools of data all over the place, and that marketing departments have built massive data-gathering and analytics engines into all customer-facing systems, which acquire more data every day.
Keep America stupid? Why not learn to use the new tools?
https://www.bostonglobe.com/2024/11/15/opinion/ai-classroom-teaching-writing/
AI in the classroom could spare educators from having to teach writing
Of all the skills I teach my high school students, I’ve always thought writing was the most important — essential to their future academic success, useful in any profession. I’m no longer so sure.
Thanks to AI, writing’s place in the curriculum today is like that of arithmetic at the dawn of cheap and widely available calculators. The skills we currently think are essential — spelling, punctuation, subject-predicate agreement — may soon become superfluous, and schools will have to adapt.
… But writing takes a lot of time to do well, and time is the most precious resource in education. Longer writing assignments, like essays or research papers, may no longer be the best use of it. In the workplace, it is becoming increasingly common for AI to write the first draft of any long-form document. More than half of professional workers used AI on the job in 2023, according to one study, and of those who used AI, 68 percent were using it to draft written content. Refining AI’s draft — making sure it conveys what is intended — becomes the real work. From a business perspective, this is an efficient division of labor: Humans come up with the question, AI answers it, and humans polish the AI output.
In schools, the same process is called cheating.
(Related)
https://techcrunch.com/2024/11/20/openai-releases-a-teachers-guide-to-chatgpt-but-some-educators-are-skeptical/
OpenAI releases a teacher’s guide to ChatGPT, but some educators are skeptical
OpenAI envisions teachers using its AI-powered tools to create lesson plans and interactive tutorials for students. But some educators are wary of the technology — and its potential to go awry.
Today, OpenAI released a free online course designed to help K-12 teachers learn how to bring ChatGPT, the company’s AI chatbot platform, into their classrooms. Created in collaboration with the nonprofit organization Common Sense Media, with which OpenAI has an active partnership, the one-hour, nine-module program covers the basics of AI and its pedagogical applications.