It’s not just lazy lawyers…
https://www.bespacific.com/computer-science-papers-rife-with-ai/
Computer science papers rife with AI
Semafor: “The largest pre-publication repository of scientific studies announced it would no longer accept computer science review papers because of the rise of fake AI-generated content. arXiv is a preprint site, which accepts papers before peer review with minimal moderation. It allows wider access to research that is usually behind paywalls, and vastly speeds up publication times, although it has fewer quality controls. But the percentage of papers rejected has recently shot up. In a blog post, arXiv said it had seen an ‘unmanageable influx’ of papers, many AI-generated, and the problem was especially pronounced in computer science. Recent research suggested that up to 22% of all CS papers might contain some AI-generated content.”
Isn’t there an automatic freeze on potential evidence? (Was thirty days sufficient for all police purposes?)
Judge Rules Flock Surveillance Images Are Public Records That Can Be Requested By Anyone
404 Media: “A judge in Washington has ruled that police images taken by Flock’s AI license plate-scanning cameras are public records that can be requested as part of normal public records requests. The decision highlights the sheer volume of the technology-fueled surveillance state in the United States, and shows that at least in some cases, police cannot withhold the data collected by their surveillance systems. In a ruling last week, Judge Elizabeth Neidzwski ruled that ‘the Flock images generated by the Flock cameras located in Stanwood and Sedro-Woolley [Washington] are public records under the Washington State Public Records Act,’ that they are ‘not exempt from disclosure,’ and that ‘an agency does not have to possess a record for that record to be subject to the Public Records Act.’ She further found that ‘Flock camera images are created and used to further a governmental purpose’ and that the images are public records because they were paid for by taxpayers. Despite this, the records that were requested as part of the case will not be released, because the city automatically deleted them after 30 days. Local media in Washington first reported on the case; 404 Media bought Washington State court records to report the specifics of the case in more detail…
Learning to be dumber?
https://www.zdnet.com/article/does-your-chatbot-have-brain-rot-4-ways-to-tell/
Does your chatbot have 'brain rot'? 4 ways to tell
… Last month, a team of AI researchers from the University of Texas at Austin, Texas A&M, and Purdue University published a paper advancing what they call "the LLM Brain Rot Hypothesis" -- basically, that the output of AI chatbots like ChatGPT, Gemini, Claude, and Grok will degrade the more they're exposed to "junk data" found on social media.
You mean it might not be all good?
https://www.businessinsider.com/companies-are-warning-about-risks-of-ai-sec-filings-2025-11
The new AI warnings popping up in SEC filings
An increasing share of companies' annual filings with the Securities and Exchange Commission now caution investors that the technology could have a significant negative impact on their businesses.
So far this year, 418 publicly traded companies valued at more than $1 billion have cited AI-related risk factors associated with reputational harm in those reports, according to an analysis conducted with AlphaSense. That is a 46% jump from 2024 and roughly nine times greater than in 2023.
AI could hurt a company's image, the filings say, by producing biased or incorrect information, compromising security, or infringing on others' rights.