Thursday, July 20, 2023

I think ChatGPT is being contaminated by the articles it generates.

https://venturebeat.com/ai/not-just-in-your-head-chatgpts-behavior-is-changing-say-ai-researchers/

Not just in your head: ChatGPT’s behavior is changing, say AI researchers

Researchers at Stanford University and the University of California, Berkeley have published a paper (not yet peer-reviewed) on the open-access preprint server arXiv.org, finding that the “performance and behavior” of OpenAI’s ChatGPT large language models (LLMs) changed between March and June 2023. The researchers concluded that their tests revealed “performance on some tasks have gotten substantially worse over time.”

Commenters on the ChatGPT subreddit and on Hacker News took issue with the thresholds the researchers counted as failures, but other longtime users seemed comforted by evidence that the perceived changes in the generative AI’s output weren’t merely in their heads.

This work brings to light a new concern that business and enterprise operators need to be aware of when evaluating generative AI products. The researchers have dubbed this change in behavior “LLM drift” and cite it as critical to interpreting results from popular chat AI models.





This might be useful for generating phishing examples; on the other hand, it might be good enough to turn the security staff into criminals.

https://nypost.com/2023/07/19/chatgpts-evil-twin-wormgpt-is-secretly-entering-emails-raiding-banks/

ChatGPT’s evil twin WormGPT is secretly entering emails, raiding banks

ChatGPT has an evil twin — and it wants to take your money.

WormGPT was created by a hacker and is designed for phishing attacks on a larger scale than ever before.

Cybersecurity firm SlashNext confirmed that the “sophisticated AI model” was developed purely with malevolent intent.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley wrote on the firm’s website. “WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

The researchers experimented with WormGPT to see how dangerous it could be, asking it to create phishing emails.

“The results were unsettling,” the cyber expert confirmed. “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC [business email compromise] attacks.”

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” Kelley chillingly added. [Does any AI have ethical boundaries? Bob]


