Thursday, August 31, 2023

This is probably too late and almost certainly ineffective.

https://www.axios.com/2023/08/31/major-websites-are-blocking-ai-crawlers-from-accessing-their-content

Major websites are blocking AI crawlers from accessing their content

Nearly 20% of the top 1000 websites in the world are blocking crawler bots that gather web data for AI services, according to new data from Originality.AI, an AI content detector.

Why it matters: In the absence of clear legal or regulatory rules governing AI's use of copyrighted material, websites big and small are taking matters into their own hands.

Driving the news: OpenAI introduced its GPTBot crawler early in August, declaring that the data gathered "may potentially be used to improve future models," promising that paywalled content would be excluded, and instructing websites on how to bar the crawler.

Soon after, several high-profile news sites, including the New York Times, Reuters and CNN, began blocking GPTBot, and many more have since followed. (Axios is among them.)
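For reference, the mechanism is plain old robots.txt: OpenAI documents GPTBot's user-agent string and says the crawler honors a standard Disallow rule. The minimal sketch below (Python standard library only) checks whether a given site currently bars GPTBot; the example.com address is a placeholder, not any specific publisher.

```python
# Minimal sketch: does a site's robots.txt bar OpenAI's GPTBot crawler?
# A blocking entry, per OpenAI's documentation, looks like:
#   User-agent: GPTBot
#   Disallow: /
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"   # placeholder; swap in any site to inspect
USER_AGENT = "GPTBot"              # user-agent string OpenAI documents for its crawler

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()                          # fetch and parse the live robots.txt

if rp.can_fetch(USER_AGENT, f"{SITE}/"):
    print(f"{SITE} does not block {USER_AGENT}")
else:
    print(f"{SITE} blocks {USER_AGENT}")
```

Note that robots.txt only works prospectively, and only against crawlers that choose to respect it, which is why blocking now does nothing about data already scraped.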





Finally doing something about it?

https://techcrunch.com/2023/08/30/chatgpt-maker-openai-accused-of-string-of-data-protection-breaches-in-gdpr-complaint-filed-by-privacy-researcher/

ChatGPT-maker OpenAI accused of string of data protection breaches in GDPR complaint filed by privacy researcher

Questions about ChatGPT-maker OpenAI’s ability to comply with European privacy rules are in the frame again after a detailed complaint was filed with the Polish data protection authority yesterday.

The complaint, which TechCrunch has reviewed, alleges the U.S.-based AI giant is in breach of the bloc's General Data Protection Regulation (GDPR) across a sweep of dimensions: lawful basis, transparency, fairness, data access rights, and privacy by design are all areas in which it argues OpenAI is infringing EU privacy rules (namely, Articles 5(1)(a), 12, 15, 16 and 25(1) of the GDPR).

Indeed, the complaint frames the novel generative AI technology and its maker's approach to developing and operating the viral tool as essentially a systematic breach of the pan-EU regime. It also suggests OpenAI overlooked the GDPR's requirement to consult regulators in advance (Article 36): a proactive assessment identifying high risks to people's rights, absent mitigating measures, should have given it pause. Yet OpenAI apparently rolled ahead and launched ChatGPT in Europe without engaging local regulators, which could have helped it avoid falling foul of the bloc's privacy rulebook.





Of course you might wind up paying both… (Is "fine evasion" like "tax evasion"?)

https://cybernews.com/security/gdpr-abused-ransomware-extortion/

GDPR used by new ransom gang to extort victims

Appropriately called Ransomed, the group was first spotted by threat intelligence firm Flashpoint on August 15th. It comes complete with the usual dedicated Telegram channel and sports a "ransomed" domain name for what appears to be a flagship website.

What is unusual about Ransomed is its novel use of GDPR to pressure victims into paying up once it has carried out a data breach.

“Ransomed is leveraging an extortion tactic that has not been observed before — according to communications from the group, they use data protection laws like the EU’s GDPR to threaten victims with fines if they do not pay the ransom,” said Flashpoint. “This tactic marks a departure from typical extortionist operations by twisting protective laws against victims to justify their illegal attacks.”

Flashpoint adds that it believes Ransomed's strategy is likely to set ransom demands lower than the cost of incurring a fine for a data security violation, increasing the chances that victims pay up.
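To see why that pitch can look plausible to a panicked victim, here is a minimal sketch of the arithmetic, with hypothetical figures that are not taken from Flashpoint's reporting. The only hard number is the GDPR's headline ceiling: Article 83(5) caps the most serious fines at the higher of €20 million or 4% of worldwide annual turnover.

```python
# Minimal sketch of the claimed extortion math; all victim figures are hypothetical.
# GDPR Art. 83(5) caps the most serious fines at the higher of EUR 20M or
# 4% of total worldwide annual turnover (actual fines are typically far lower).

def gdpr_fine_ceiling(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR Art. 83(5) fine for a given turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

turnover = 150_000_000      # hypothetical victim: EUR 150M annual turnover
ransom_demand = 200_000     # hypothetical demand, pitched well below the ceiling

ceiling = gdpr_fine_ceiling(turnover)
print(f"Fine ceiling: EUR {ceiling:,.0f}")
print(f"Ransom demand: EUR {ransom_demand:,.0f} ({ransom_demand / ceiling:.1%} of the ceiling)")
```

The comparison is misleading by design: paying the gang removes neither the breach-notification duty nor the fine risk, so, as noted above, you might wind up paying both.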





Tools & Techniques.

https://www.bespacific.com/how-to-talk-to-an-ai-chatbot/

How to talk to an AI chatbot

Washington Post – An ordinary human’s guide to getting extraordinary results from a chatbot: “ChatGPT doesn’t come with an instruction manual. But maybe it should. Only a quarter of Americans who have heard of the AI chatbot say they have used it, Pew Research Center reported this week. “The hardest lesson” for new AI chatbot users to learn, says Ethan Mollick, a Wharton professor and chatbot enthusiast, “is that they’re really difficult to use.” Or at least, to use well. The Washington Post talked with Mollick and other experts about how to get the most out of AI chatbots — from OpenAI’s ChatGPT to Google’s Bard and Microsoft’s Bing — and how to avoid common pitfalls. Often, users’ first mistake is to treat them like all-knowing oracles, instead of the powerful but flawed language tools that they really are. Here’s our guide to their favorite strategies for asking a chatbot to help with explaining, writing and brainstorming. Just select a topic and follow along…”
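The same pattern works when you drive a model through code rather than the chat window. Below is a minimal sketch using the OpenAI Python client as it existed in mid-2023 (openai.ChatCompletion.create); the model name and prompt are illustrative, and the point is the structure the experts recommend: give the model a role and context, state constraints, and ask for several options instead of one oracular answer.

```python
# Minimal sketch of "role + context + constraints + ask for options",
# using the OpenAI Python client circa mid-2023. Model and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [
    # Role and context up front, rather than treating the model as an oracle.
    {"role": "system",
     "content": "You are an editor helping a non-expert write a plain-language "
                "summary for a company newsletter."},
    # Explicit constraints, plus a request for alternatives to choose from.
    {"role": "user",
     "content": "Draft three alternative opening paragraphs (under 80 words each) "
                "explaining what a web crawler is. Vary the tone: neutral, "
                "conversational, and cautionary."},
]

response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```

Treat whatever comes back as a first draft to edit and push back on, not a finished answer.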

See also MIT Technology Review – Large language models aren’t people. Let’s stop testing them as if they were. With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.


