Who benefits?
https://www.bbc.com/news/articles/cn4w0d8zz22o
Secret Service disrupts telecom threat near UN General Assembly
The US Secret Service disrupted a network of telecommunications devices that could have shut down cellular systems as leaders gather for the United Nations General Assembly in New York City.
The agency said on Tuesday that last month it found more than 300 SIM servers and 100,000 SIM cards that could have been used for telecom attacks within the area encompassing parts of New York, New Jersey and Connecticut.
"This
network had the power to disable cell phone towers and essentially
shut down the cellular network in New York City," said special
agent in charge Matt McCool.
… The unidentified nation-state actors were sending encrypted messages to organised crime groups, cartels and terrorist organisations, he added.
The equipment was capable of texting the entire population of the US within 12 minutes, officials say. It could also have disabled mobile phone towers and launched distributed denial of service attacks that might have blocked emergency dispatch communications.
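For scale, here is a quick back-of-envelope check of that texting claim. The arithmetic is mine, not the Secret Service's, and the population figure is an assumption of roughly 340 million US residents:

# Rough sanity check of "text the entire US population in 12 minutes"
# using the reported 100,000 SIM cards. Population is my assumption.
population = 340_000_000
sims = 100_000
window_seconds = 12 * 60

texts_per_sim = population / sims              # 3,400 texts per SIM
rate_per_sim = texts_per_sim / window_seconds  # ~4.7 texts/sec per SIM
print(f"{texts_per_sim:,.0f} texts per SIM, {rate_per_sim:.2f} texts/sec each")

Each SIM would need to send several texts per second for the full 12 minutes, which is plausible for dedicated SMS-blasting hardware, so the headline number is at least arithmetically consistent.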
I’ve been waiting for this. It seemed inevitable after reading “The Dynamo and the Computer.”
https://gizmodo.com/study-claims-over-half-of-tech-firms-are-considering-restructuring-thanks-to-ai-2000659089
Study Claims Over Half of Tech Firms Are Considering ‘Restructuring,’ Thanks to AI
Murmurs about a linkage between the rollout of new AI services and recent waves of layoffs within the tech industry have been ongoing for some time. Similarly, a recent cooling of the job market for coders has also been attributed to the rise of so-called “vibe coding,” in which less skilled technicians create websites and products with the help of an automated assistant.
Now, a new report from a firm that works with tech companies claims that a majority of its clients say they are considering big changes to accommodate greater integration of AI.
The report comes from Source, a consultancy that provides services to media, tech, and telecom firms. The company found that some 55 percent of its clients expect to invest in organizational restructuring during the next 18 months. The report seems to attribute these changes to AI…
What could be worse than bogus citations?
https://www.bespacific.com/ai-models-are-using-material-from-retracted-scientific-papers/
AI models are using material from retracted scientific papers
MIT Technology Review: “Some AI chatbots rely on flawed research from retracted scientific papers to answer questions, according to recent studies. The findings, confirmed by MIT Technology Review, raise questions about how reliable AI tools are at evaluating scientific research and could complicate efforts by countries and industries seeking to invest in AI tools for scientists. AI search tools and chatbots are already known to fabricate links and references. But answers based on the material from actual papers can mislead as well if those papers have been retracted. The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee in Memphis and an author of one of the recent studies. But, he says, if people only look at the content of the answer and do not click through to the paper and see that it’s been retracted, that’s really a problem. Gu and his team asked OpenAI’s ChatGPT, running on the GPT-4o model, questions based on information from 21 retracted papers about medical imaging. The chatbot’s answers referenced retracted papers in five cases but advised caution in only three. While it cited non-retracted papers for other questions, the authors note that it may not have recognized the retraction status of the articles. In a study from August, a different group of researchers used ChatGPT-4o mini to evaluate the quality of 217 retracted and low-quality papers from different scientific fields; they found that none of the chatbot’s responses mentioned retractions or other concerns. (No similar studies have been released on GPT-5, which came out in August.)
“If [a tool is] facing the general public, then using retraction as a kind of quality indicator is very important,” says Yuanxi Fu, an information science researcher at the University of Illinois Urbana-Champaign. There’s “kind of an agreement that retracted papers have been struck off the record of science,” she says, “and the people who are outside of science—they should be warned that these are retracted papers.” OpenAI did not provide a response to a request for comment about the paper results. The problem is not limited to ChatGPT. In June, MIT Technology Review tested AI tools specifically advertised for research work, such as Elicit, Ai2 ScholarQA (now part of the Allen Institute for Artificial Intelligence’s Asta tool), Perplexity, and Consensus, using questions based on the 21 retracted papers in Gu’s study. Elicit referenced five of the retracted papers in its answers, while Ai2 ScholarQA referenced 17, Perplexity 11, and Consensus 18—all without noting the retractions. Some companies have since made moves to correct the issue. “Until recently, we didn’t have great retraction data in our search engine,” says Christian Salem, cofounder of Consensus. His company has now started using retraction data from a combination of sources, including publishers and data aggregators, independent web crawling, and Retraction Watch, which manually curates and maintains a database of retractions. In a test of the same papers in August, Consensus cited only five retracted papers…”
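That last fix, cross-referencing citations against retraction data, is straightforward to sketch. Below is a minimal, hypothetical check in Python against the public Crossref REST API, where retraction notices are registered as updates to the original work; the updates filter and the update-to field are real Crossref features, but the DOI shown is a placeholder, and this is not how Consensus or any other vendor necessarily implements it:

# Minimal sketch: ask Crossref whether any registered "update" to a DOI
# is a retraction or withdrawal notice. Placeholder DOI below.
import requests

def is_retracted(doi: str) -> bool:
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=10,
    )
    resp.raise_for_status()
    # Each updating work names what it updates and how in "update-to".
    for item in resp.json()["message"]["items"]:
        for update in item.get("update-to", []):
            if update.get("type", "").lower() in {"retraction", "withdrawal"}:
                return True
    return False

print(is_retracted("10.1234/placeholder-doi"))  # hypothetical DOI

A fuller check would also consult the Retraction Watch database mentioned above, since it catalogs retractions that publishers never register with Crossref.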