As I predicted!
https://bdtechtalks.com/2023/06/19/chatgpt-model-collapse/
ChatGPT will make the web toxic for its successors
Generative artificial intelligence has empowered everyone to be more creative. Large language models (LLMs) like ChatGPT can generate essays and articles with impressive quality. Diffusion models such as Stable Diffusion and DALL-E create stunning images.
But what happens when the internet becomes flooded with AI-generated content? That content will eventually be collected and used to train the next iterations of generative models. According to a study by researchers at the University of Oxford, University of Cambridge, Imperial College London, and the University of Toronto, machine learning models trained on content generated by generative AI will suffer from irreversible defects that gradually worsen across generations.
The only way to maintain the quality and integrity of future models is to make sure they are trained on human-generated content. But with LLMs such as ChatGPT and GPT-4 enabling the creation of content at scale, access to human-created data might soon become a luxury that few can afford.
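The mechanism behind that warning is easy to see in a toy setting: if each new model is fit only to samples produced by the previous one, estimation error compounds and the tails of the original distribution are the first thing to vanish. A minimal sketch in Python, using a Gaussian fit as a stand-in for a generative model (an illustration of the dynamic, not the study's actual experiments):

```python
# Toy illustration of model collapse: each "generation" is trained only on
# samples produced by the previous generation's model. The "model" here is
# just a fitted Gaussian (mean, std) standing in for an LLM or diffusion model.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000        # size of each generation's training set
n_generations = 30

data = rng.normal(0.0, 1.0, n_samples)        # generation 0: "human" data

for gen in range(n_generations):
    mu, sigma = data.mean(), data.std()       # "train" on whatever data is available
    data = rng.normal(mu, sigma, n_samples)   # the next generation sees only synthetic data
    print(f"generation {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```

Run it and the fitted std drifts away from 1 while the mean wanders: rare events disappear first, and the errors never get corrected because no fresh human-generated data re-enters the loop.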
How dare you tell us we’re lying!
https://www.nytimes.com/2023/06/19/technology/gop-disinformation-researchers-2024-election.html
G.O.P. Targets Researchers Who Study Disinformation Ahead of 2024 Election
On Capitol Hill and in the courts, Republican lawmakers and activists are mounting a sweeping legal campaign against universities, think tanks and private companies that study the spread of disinformation, accusing them of colluding with the government to suppress conservative speech online.
The effort has encumbered its targets with expansive requests for information and, in some cases, subpoenas — demanding notes, emails and other information related to social media companies and the government dating back to 2015. Complying has consumed time and resources and already affected the groups’ ability to do research and raise money, according to several people involved.
Sounds like a rather serious hole in TSA “security.” (Yes, we just ignore that high-tech ID.)
https://coloradosun.com/2023/06/19/denver-airport-colorado-license-tsa-security-dia/
Got a Colorado driver’s license? Expect to run into problems with TSA at the airport.
… Dankers said TSA couldn’t provide any specific detail about why their system has issues with Colorado IDs or when the issue would be resolved.
If a traveler’s license is stopped by a TSA machine, however, they need only show their boarding pass to be allowed through, she said.
Did anyone listen?
https://foreignpolicy.com/2023/06/19/ai-artificial-intelligence-national-security-foreign-policy-threats-prediction/
AI Has Entered the Situation Room
At the start of 2022, seasoned Russia experts and national security hands in Washington watched in disbelief as Russian President Vladimir Putin massed his armies on the borders of Ukraine. Was it all a bluff to extract more concessions from Kyiv and the West, or was he about to unleash a full-scale land war to redraw Europe’s borders for the first time since World War II? The experts shook the snow globe of their vast professional expertise, yet the debate over Putin’s intentions never settled on a conclusion.
But in Silicon Valley, we had already concluded that Putin would invade—four months before the Russian attack. By the end of January, we had predicted the start of the war almost to the day. How? Our team at Rhombus Power, made up largely of scientists, engineers, national security experts, and former national security practitioners, was looking at a completely different picture than the traditional foreign-policy community. Relying on artificial intelligence to sift through almost inconceivable amounts of online and satellite data, our machines were aggregating actions on the ground, counting inputs that included movements at missile sites and local business transactions, and building heat maps of Russian activity virtually in real time.
We got it right because we weren’t bound by the limitations of traditional foreign-policy analysis. We weren’t trying to divine Putin’s motivations, nor did we have to wrestle with our own biases and assumptions trying to interpret his words. Instead, we were watching what the Russians were actually doing by tracking often small but highly important pieces of data that, when aggregated effectively, became powerful predictors. All kinds of details caught our attention: Weapons systems moved to the border regions in 2021 for what the Kremlin claimed were military drills were still there, as if pre-positioned for future forward advances. Russian officers’ spending patterns at local businesses made it obvious they weren’t planning on returning to barracks, let alone home, anytime soon. By late October 2021, our machines were telling us that war was coming.
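Stripped of its scale, what the excerpt describes is aggregation of many weak indicators against their baselines into a per-region score that can be rendered as a heat map. A deliberately simplified sketch in Python (the indicators, weights, and numbers below are hypothetical; Rhombus Power's actual pipeline is not public):

```python
# Hypothetical sketch of indicator aggregation: normalize each signal against
# its baseline, then combine the deviations into one activity score per region.
import numpy as np

regions = ["Region A", "Region B", "Region C"]
indicators = ["missile_site_moves", "rail_loadings", "officer_spending"]
print("indicators:", ", ".join(indicators))

# Hypothetical daily counts (rows = regions, columns = indicators).
counts = np.array([
    [14.0,  9.0, 120.0],
    [ 3.0,  2.0,  35.0],
    [ 8.0, 11.0,  90.0],
])

# Baselines from a "normal" period; deviations, not raw volumes, carry the signal.
baseline_mean = np.array([5.0, 4.0, 60.0])
baseline_std  = np.array([2.0, 2.0, 20.0])

z_scores = (counts - baseline_mean) / baseline_std   # how unusual is today's activity?
weights = np.array([0.5, 0.3, 0.2])                  # hypothetical analyst-assigned weights

activity = z_scores @ weights                        # one number per region for the heat map
for region, score in zip(regions, activity):
    print(f"{region}: activity score {score:+.2f}")
```

The point is not the arithmetic but the design choice the article emphasizes: the predictions come from aggregated behavior on the ground rather than from interpreting intent.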
Perspective.
https://www.technologyreview.com/2023/06/20/1075075/metas-ai-leaders-want-you-to-know-fears-over-ai-existential-risk-are-ridiculous/
Meta’s AI leaders want you to know fears over AI existential risk are “ridiculous”
Plus: Five big takeaways from Europe’s AI Act.

It’s a really weird time in AI. In just six months, the public discourse around the technology has gone from “Chatbots generate funny sea shanties” to “AI systems could cause human extinction.” Who else is feeling whiplash?
My colleague Will Douglas Heaven asked AI experts why exactly people are talking about existential risk, and why now. Meredith Whittaker, president of the Signal Foundation (which is behind the private messaging app Signal) and a former Google researcher, sums it up nicely: “Ghost stories are contagious. It’s really exciting and stimulating to be afraid.”