Cheerful news?
https://www.scientificamerican.com/article/chatbot-hallucinations-inevitable/
AI Chatbots Will Never Stop Hallucinating
… Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.
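The "no chatbot unsupervised" idea the researchers describe can be sketched as a pipeline where every generated answer passes through a verifier before reaching the user. This is a toy illustration only: `llm_generate` and `fact_check` are hypothetical stand-ins, not real APIs, and a real verifier would query a trusted knowledge source rather than a hard-coded table.

```python
def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; returns canned answers."""
    canned = {
        "capital of France?": "Paris",
        "capital of Australia?": "Sydney",  # a plausible-sounding hallucination
    }
    return canned.get(prompt, "I don't know")

def fact_check(prompt: str, answer: str) -> bool:
    """Hypothetical verifier backed by a trusted reference source."""
    reference = {
        "capital of France?": "Paris",
        "capital of Australia?": "Canberra",
    }
    return reference.get(prompt) == answer

def supervised_chat(prompt: str) -> str:
    """Never let the chatbot's output through unchecked."""
    answer = llm_generate(prompt)
    if fact_check(prompt, answer):
        return answer
    return f"[unverified] {answer}"  # flag, rather than assert, the claim

print(supervised_chat("capital of France?"))     # → Paris
print(supervised_chat("capital of Australia?"))  # → [unverified] Sydney
```

The point of the sketch is the control flow, not the lookup tables: the generator is never trusted on its own, which is exactly the supervision the quoted researchers call for.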
Should this have been published on April 1?
The fine art of human prompt engineering: How to talk to a person like ChatGPT
In a break from our normal practice, Ars is publishing this helpful guide to knowing how to prompt the "human brain," should you encounter one during your daily routine.
While AI assistants like ChatGPT have taken the world by storm, a growing body of research shows that it's also possible to generate useful outputs from what might be called "human language models," or people. Much like large language models (LLMs) in AI, HLMs have the ability to take information you provide and transform it into meaningful responses—if you know how to craft effective instructions, called "prompts."
Human prompt engineering is an ancient art form dating back at least to Aristotle's time, and it also became widely popular through books published in the modern era before the advent of computers.
Since interacting with humans can be difficult, we've put together a guide to a few key prompting techniques that will help you get the most out of conversations with human language models. But first, let's go over some of what HLMs can do.