This non-lawyer thinks this has merit.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5404770
Law-Proofing the Future
Lawmakers today face continuous calls to "future proof" the legal system against generative artificial intelligence, algorithmic decision-making, targeted advertising, and all manner of emerging technologies. This Article takes a contrarian stance: It is not the law that needs bolstering for the future, but the future that needs protection from the law. From the printing press and the elevator to ChatGPT and online deepfakes, the recurring historical pattern is familiar. Technological breakthroughs provoke wonder, then fear, then legislation. The resulting legal regimes entrench incumbents, suppress experimentation, and displace long-standing legal principles with bespoke but brittle rules. Drawing from history, economics, political science, and legal theory, this Article argues that the most powerful tools for governing technological change, the general-purpose tools of the common law, are in fact already on the books, long predating the technologies they are now called upon to govern, and ready also for whatever the future holds in store.
Rather than proposing any new statute or regulatory initiative, this Article offers something far rarer: a defense of doing less. It shows how the law's virtues of generality, stability, and adaptability are best preserved not through prophylactic regulation, but through accretional judicial decision-making. The epistemic limits that make technological forecasting so unreliable, and the hidden costs of early legislative intervention, including biased governmental enforcement and regulatory capture, mean that however fast technology may move, the law must not chase it. The case for legal restraint is thus not a defense of the status quo, but a call to preserve the conditions of freedom and equal justice under which both law and technology can evolve.
Why not just say that encryption is good?
https://therecord.media/tech-companies-ftc-censorship-laws
US warns tech companies against complying with European and British ‘censorship’ laws
U.S. tech companies were warned on Thursday they could face action from the Federal Trade Commission (FTC) for complying with the European Union and United Kingdom’s regulations about the content shared on their platforms.
Andrew Ferguson, the Trump-appointed chairman of the FTC, wrote to chief executives criticizing what he described as foreign attempts at “censorship” and efforts to countermand the use of encryption to protect American consumers’ data.
The letter said that “censoring Americans to comply with a foreign power’s laws” could be considered a violation of Section 5 of the Federal Trade Commission Act — the legislation enforced by the FTC — which prohibits unfair or deceptive practices in commerce.
Perspective.
https://www.livescience.com/technology/artificial-intelligence/there-are-32-different-ways-ai-can-go-rogue-scientists-say-from-hallucinating-answers-to-a-complete-misalignment-with-humanity
There are 32 different ways AI can go rogue, scientists say — from hallucinating answers to a complete misalignment with humanity
Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.
In new research, the scientists set out to categorize the risks of AI straying from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis" — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.