Friday, November 15, 2024

Useful tips…

https://www.zdnet.com/article/5-ways-to-catch-ai-in-its-lies-and-fact-check-its-outputs-for-your-research/

5 ways to catch AI in its lies and fact-check its outputs for your research

Sometimes, I think AI chatbots are modeled after teenagers. They can be very, very good. But other times, they tell lies. They make stuff up. They confabulate. They confidently give answers based on the assumption that they know everything there is to know, but they're woefully wrong.

Let's dig into five key steps you can take to guide an AI to accurate responses.

Perspective.

https://www.science.org/doi/10.1126/science.adt6140

The metaphors of artificial intelligence

A few months after ChatGPT was released, the neural network pioneer Terrence Sejnowski wrote about coming to grips with the shock of what large language models (LLMs) could do:

“Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way. … Some aspects of their behavior appear to be intelligent, but if it’s not human intelligence, what is the nature of their intelligence?”

What, indeed, is the nature of intelligence of LLMs and the artificial intelligence (AI) systems built on them? There is still no consensus on the answer. Many people view LLMs as analogous to an individual human mind (or perhaps, like Sejnowski, to that of a space alien)—a mind that can think, reason, explain itself, and perhaps have its own goals and intentions.

Others have proposed entirely different ways of conceptualizing these enormous neural networks: as role players that can imitate many different characters; as cultural technologies, akin to libraries and encyclopedias, that allow humans to efficiently access information created by other humans; as mirrors of human intelligence that “do not think for themselves [but instead] generate complex reflections cast by our recorded thoughts”; as blurry JPEGs of the Web that are approximate compressions of their training data; as stochastic parrots that work by “haphazardly stitching together sequences of linguistic forms…according to probabilistic information about how they combine, but without any reference to meaning”; and, most dismissively, as a kind of autocomplete on steroids.
