Tuesday, August 08, 2023

If I sold you a bogus ChatGPT clone, who would you complain to?

https://www.wired.com/story/chatgpt-scams-fraudgpt-wormgpt-crime/

Criminals Have Created Their Own ChatGPT Clones

It didn't take long. Just months after OpenAI’s ChatGPT chatbot upended the startup economy, cybercriminals and hackers are claiming to have created their own versions of the text-generating technology. The systems could, theoretically at least, supercharge criminals’ ability to write malware or phishing emails that trick people into handing over their login information.

There are outstanding questions about the authenticity of the chatbots. Cybercriminals are not exactly trustworthy characters, and there remains the possibility that they’re trying to make a quick buck by scamming each other. Despite this, the developments come at a time when scammers are exploiting the hype of generative AI for their own advantage.

The developers of these shady LLMs claim to have stripped away any kind of safety protections or ethical barriers. WormGPT was first spotted by independent cybersecurity researcher Daniel Kelley, who worked with security firm SlashNext to detail the findings. WormGPT’s developers claim the tool offers an unlimited character count and code formatting. “The AI models are notably useful for phishing, particularly as they lower the entry barriers for many novice cybercriminals,” Kelley says in an email. “Many people argue that most cybercriminals can compose an email in English, but this isn’t necessarily true for many scammers.”





Not sure the answer is here, but there are plenty of arguments…

https://www.scientificamerican.com/article/we-need-smart-intellectual-property-laws-for-artificial-intelligence/

We Need Smart Intellectual Property Laws for Artificial Intelligence

A pressing question worldwide is whether the data used to train AI systems requires consent from authors or performers, who are also seeking attribution and compensation for the use of their works.

Several governments have created special text and data mining exceptions to copyright law to make it easier to collect and use information for training AI. These allow some systems to train on online texts, images and other work that is owned by other people. Such exceptions have recently met with opposition, both from copyright owners and from critics with broader objections who want to slow down or curtail these services.

Beyond consent, the other two c’s, credit and compensation, have their own challenges, as illustrated even now with the high cost of litigation regarding infringements of copyright or patents. But one can also imagine datasets and uses in the arts or biomedical research where a well-managed AI program could be helpful to implement benefit sharing, such as the proposed open-source dividend for seeding successful biomedical products.





How much does it take to get your attention when you are making gazillions of dollars?

https://thenextweb.com/news/norway-fines-meta-privacy-violations-behavioural-advertising-ad-targeting-facebook

Norway fines Meta 1 MILLION crowns per day over data harvesting for behavioural ads

Meta’s litany of European privacy sanctions in 2023 just got a little longer. After a €390mn fine for illegal personalised ads, another €5.5mn hit for similar violations in WhatsApp, and a GDPR record €1.2bn for unsafe data transfers, this week yet another punishment arrived — and the sentence did not disappoint.

Norwegian regulators have demanded a gloriously round figure that would make Dr Evil proud: 1 MILLION crowns (€89,000) per day. The penalties are due to begin on August 14, but Meta wants a temporary injunction against the order, Reuters reports.





Should teachers panic?

https://www.bespacific.com/practical-ai-for-teachers-and-students/

Practical AI for Teachers and Students

Wharton School – 5 Part Course on YouTube for Students and Instructors/Teachers – Description of the Introduction: “In this introduction, Wharton Interactive’s Faculty Director Ethan Mollick and Director of Pedagogy Lilach Mollick provide an overview of how large language models (LLMs) work and explain how this latest generation of models has impacted how we work and how we learn. They also discuss the different types of large language models referenced in their five-part crash course: OpenAI’s ChatGPT4, Microsoft’s Bing in Creative Mode, and Google’s Bard. This video is Part 1 of a five-part course in which Wharton Interactive provides an overview of AI large language models for educators and students. They take a practical approach and explore how the models work, and how to work effectively with each model, weaving in your own expertise. They also show how to use AI to make teaching easier and more effective, with example prompts and guidelines, as well as how students can use AI to improve their learning. Links to sources and prompts.”
