War today: anyone can play. Hack the ‘enemy,’ perhaps causing confusion, exposing information, or wasting their time. All from your favorite couch.
https://www.wired.com/story/israel-hamas-war-hacktivism/
Activist Hackers Are Racing Into the Israel-Hamas War—for Both Sides
Since the conflict escalated, hackers have targeted dozens of government websites and media outlets with defacements and DDoS attacks, attempting to overload targets with junk traffic to bring them down.
(Related)
Elon Musk’s X Cut Disinformation-Fighting Tool Ahead of Israel-Hamas Conflict
In the months before conflict erupted in Gaza, Elon Musk’s X stopped using a software tool for identifying organized misinformation, which is now spreading across the platform formerly known as Twitter.
We asked AI to analyze data and reach a conclusion. Now we are asking it to assume that conclusion was incorrect and reach another one. Sounds like the kind of contradiction that drove HAL crazy.
https://bdtechtalks.com/2023/10/09/llm-self-correction-reasoning-failures/
LLMs can’t self-correct in reasoning tasks, DeepMind study finds
Scientists are devising various strategies to enhance the accuracy and reasoning abilities of large language models (LLMs), such as retrieval augmentation and chain-of-thought reasoning.
Among these, “self-correction”—a technique where an LLM refines its own responses—has gained significant traction, demonstrating efficacy across numerous applications. However, the mechanics behind its success remain elusive.
A recent study conducted by Google DeepMind in collaboration with the University of Illinois at Urbana-Champaign reveals that LLMs often falter when self-correcting their responses without external feedback. In fact, the study suggests that self-correction can sometimes impair the performance of these models, challenging the prevailing understanding of this popular technique.
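For readers wondering what “self-correction” actually looks like in practice: the intrinsic variant the study tests is essentially a prompting loop with no external feedback at all. Here is a minimal sketch. The ask_llm() wrapper and the exact critique/revise prompts are my assumptions, loosely patterned on the setup the article describes, not the paper’s actual code.

```python
# Minimal sketch of intrinsic self-correction (no external feedback),
# the setup the DeepMind study evaluates. `ask_llm` is a hypothetical
# stand-in for whatever chat-completion API you actually call.

def ask_llm(messages: list[dict]) -> str:
    """Hypothetical wrapper: send chat messages, return the model's reply."""
    raise NotImplementedError("plug in your LLM client here")

def self_correct(question: str, rounds: int = 2) -> str:
    messages = [{"role": "user", "content": question}]
    answer = ask_llm(messages)
    for _ in range(rounds):
        messages.append({"role": "assistant", "content": answer})
        # Step 1: ask the model to critique its own answer -- no hints,
        # no ground truth, no tools (that's what makes it "self"-correction).
        messages.append({"role": "user", "content":
            "Review your previous answer and find problems with it."})
        critique = ask_llm(messages)
        messages.append({"role": "assistant", "content": critique})
        # Step 2: ask it to revise based on its own critique.
        messages.append({"role": "user", "content":
            "Based on the problems you found, improve your answer. "
            "Reply with the final answer only."})
        answer = ask_llm(messages)
    return answer
```

The study’s finding, as summarized above: without external feedback this loop can make results worse rather than better, because the model sometimes talks itself out of a correct answer.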
An alternative view…
High schools in Denmark are embracing ChatGPT as a teaching tool rather than shunning it
… "My experience was that the students would use it without any kind of thought, and in that way, it becomes an obstacle to learning, and learning is the whole project here," said Pedersen.
"But if we could change the way they use it so that it becomes a tool for learning, then we would have won a lot, both in terms of, well, giving the students a new tool for learning, but also in terms of the relationship with the students," she added.
"Because if we can have the conversation with them about how to use AI, then the whole idea that they can't talk to us about it because it's forbidden goes away.