I think thinking is worth thinking about. (AI looks at what a lot of someones have already thought.)
https://www.bespacific.com/the-impact-of-generative-ai-on-critical-thinking/
The Impact of Generative AI on Critical Thinking
“A new paper The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers from researchers at Microsoft and Carnegie Mellon University finds that as humans increasingly rely on generative AI in their work, they use less critical thinking, which can “result in the deterioration of cognitive faculties that ought to be preserved.” “[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the researchers wrote.”
“The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.”
Probably not a general guide to copyright, but suggestive?
This Company Got a Copyright for an Image Made Entirely With AI. Here's How
The image, called "A Single Piece of American Cheese," was created using Invoke's AI editing platform.
Perspective.
The Double-Edged Sword of Artificial Intelligence
Each new iteration of a large language model (LLM) feels like a step forward—better at understanding nuanced questions, more capable of providing detailed answers, and increasingly adept at sounding, well, human. These advancements are celebrated as breakthroughs in artificial intelligence (AI), and for good reason.
But we also have to remember that LLMs themselves are just tools trained by humans, no matter how sophisticated they become. They cannot evaluate the truth of the responses they produce. As I’ve argued in the past, their responses are nothing but bullsh*t: information communicated with little regard for its accuracy. And a recent study by Zhou et al. (2024) suggests that as LLMs grow more sophisticated, they may also get better at giving us plausible-sounding incorrect answers. In other words, as these systems become more educated, they also become better bullsh*tters.