Snapping up children is easier?
ICE can now enter K-12 schools – here’s what educators should know about student rights and privacy
United States federal agents tried to enter two Los Angeles elementary schools on April 7, 2025, and were denied entry, according to the Los Angeles Times. The agents were apparently seeking contact with five students who had allegedly entered the country without authorization.
The Trump administration has been targeting foreign-born college students and professors for deportation since February 2025. This was the first known attempt to target younger students since the U.S. Department of Homeland Security rescinded, in January, a 2011 policy that had limited immigration enforcement actions in locations the government deemed sensitive, such as hospitals, churches and schools.
“Criminals will no longer be able to hide in America’s schools and churches to avoid arrest,” the department said on Jan. 21, 2025.
What else might be ‘verified’ by a facial scan?
https://www.bbc.com/news/articles/cjr75wypg0vo
Discord's face scanning age checks 'start of a bigger shift'
Discord is testing face scanning to verify some users' ages in the UK and Australia.
The social platform, which says it has over 200 million monthly users around the world, was initially used by gamers but now has communities on a wide range of topics including pornography.
The UK's online safety laws mean platforms with adult content will need to have "robust" age verification in place by July.
And social media expert Matt Navarra told the BBC "this isn't a one-off - it's the start of a bigger shift".
"Regulators want real proof, and facial recognition might be the fastest route there," he said.
"So let it be written, so let it be done" (It’s like an AI commandment.)
https://gizmodo.com/a-scanning-error-created-a-fake-science-term-now-ai-wont-let-it-die-2000590659
A Scanning Error Created a Fake Science Term—Now AI Won’t Let It Die
AI trawling the internet’s vast repository of journal articles has reproduced an error that’s made its way into dozens of research papers—and now a team of researchers has found the source of the issue.
It’s the question on the tip of everyone’s tongues: what the hell is “vegetative electron microscopy”? The term sounds technical, maybe even credible, but it’s complete nonsense. And yet it’s turning up in scientific papers, AI responses, and even peer-reviewed journals. So how did this phantom phrase become part of our collective knowledge?
As painstakingly reported by Retraction Watch in February, the term may have been pulled from parallel columns of text in a 1959 paper on bacterial cell walls. The AI seemed to have jumped the columns, reading two unrelated lines of text as one contiguous sentence, according to one investigator.
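To make that failure mode concrete, here is a minimal sketch of how reading a two-column page straight across each row fuses unrelated text. The column contents below are invented for illustration; they are not the actual wording of the 1959 paper.

```python
# Sketch of how naive row-wise extraction fuses two unrelated columns.
# The column text is invented for illustration, not the 1959 paper's wording.
left_column = [
    "cultures of the",
    "vegetative",           # left column: about bacterial vegetative cells
    "cells were fixed",
]
right_column = [
    "prepared for",
    "electron microscopy",  # right column: about imaging methodology
    "as described above",
]

# Correct extraction: read each column top to bottom, one after the other.
correct = " ".join(left_column + right_column)

# Faulty extraction: read straight across each row, jumping the gutter.
fused = " ".join(f"{left} {right}" for left, right in zip(left_column, right_column))

print(fused)
# cultures of the prepared for vegetative electron microscopy cells were fixed as described above
```

The fused output contains “vegetative electron microscopy” as if it were one phrase, even though neither column ever said it.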
(Related)
https://www.bespacific.com/russia-seeds-chatbots-with-lies-any-bad-actor-could-game-ai-the-same-way/
Russia seeds chatbots with lies. Any bad actor could game AI the same way.
Washington Post [no paywall]: “Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform. Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation. Earlier this year, when researchers asked 10 leading chatbots about topics targeted by false Russian messaging, such as the claim that the United States was making bioweapons in Ukraine, a third of the responses repeated those lies. Moscow’s propaganda inroads highlight a fundamental weakness of the AI industry: Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor…”
Perspective.
An epistemic solution to do away with our illusion of AI objectivity
AI-generated output has the potential to be much more than answers if it includes a summary of relevant sources, indicators of disagreement between them and a confidence score based on source credibility. Jakub Drábik calls this epistemic responsibility. He writes that AI tools don’t always have to be right, but they must be more transparent, explaining why they think they are right and what might make them wrong.
… As a historian working with textual sources, I am trained to ask not only what a statement claims, but where it comes from, why it exists and how it can be evaluated. In recent months, I have found myself turning to LLMs to assist in my work—only to encounter answers that sound right but cannot be traced, supported or interrogated. When I ask for references, I am given hallucinations. When I ask for balance, I am given rhetoric. When I ask for method, I get metaphor. No matter how careful the prompt, the system has no true way to check its own statements.
This is not a matter of prompting technique. It is structural. Today’s AI systems are trained on vast amounts of language, but not on knowledge in the epistemic sense: grounded, verifiable, source-aware and accountable. What results is a surface of plausibility without a spine of justification.
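Read as a design requirement, the proposal above says an answer should carry its sources, an indicator of disagreement among them, and a credibility-weighted confidence score. The sketch below is one hypothetical shape for such an object; the field names and scoring rule are illustrative assumptions, not Drábik’s specification.

```python
# Hypothetical shape for an "epistemically responsible" answer object.
# Field names and the scoring rule are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Source:
    citation: str           # traceable reference, e.g. a DOI or URL
    credibility: float      # 0.0-1.0, assumed to come from some vetting process
    supports_claim: bool    # whether this source agrees with the answer

@dataclass
class EpistemicAnswer:
    claim: str
    sources: list[Source] = field(default_factory=list)

    @property
    def disagreement(self) -> float:
        """Fraction of cited sources that contradict the claim."""
        if not self.sources:
            return 0.0
        return sum(1 for s in self.sources if not s.supports_claim) / len(self.sources)

    @property
    def confidence(self) -> float:
        """Credibility-weighted support; zero when nothing is cited at all."""
        total = sum(s.credibility for s in self.sources)
        if total == 0:
            return 0.0  # no traceable sources means no confidence, per the proposal
        support = sum(s.credibility for s in self.sources if s.supports_claim)
        return support / total
```

Under this shape, an answer with no traceable sources scores zero confidence rather than sounding authoritative, which is the heart of the proposal.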