Friday, September 29, 2023

Advocates for GDPR? Hardly.

https://www.techdirt.com/2023/09/28/the-group-claiming-to-have-hacked-sony-is-using-gdpr-as-a-weapon-for-demanding-ransoms/

The Group Claiming To Have Hacked Sony Is Using GDPR As A Weapon For Demanding Ransoms

We’ve spilled a great deal of ink discussing the GDPR and its failures and unintended consequences. The European data privacy law was ostensibly built to protect the data of private citizens, and was also expected to result in heavy fines for primarily American internet companies; it has mostly failed on both counts. While the larger American internet players have the money and resources to navigate the GDPR just fine, smaller companies and innovative startups can’t. The end result has been to harm competition, harm innovation, and create a scenario rife with harmful unintended consequences. A bang-up job all around, in other words.

And now we have yet another unintended consequence: hacking groups are beginning to use the GDPR as a weapon to extort ransom money from private companies. You may have heard that a hacking group calling itself Ransomed.vc is claiming to have compromised all of Sony. We don’t yet have proof that the hack is that widespread, but hacking groups generally don’t lie about that sort of thing, since doing so ruins their “business” model, and Ransomed.vc has also claimed that if a buyer isn’t found for Sony’s data, it will simply release that data on September 28th. So, as to what they actually have, I guess we’ll just have to wait and see.

But what really caught my attention was the description of how this particular group goes about threatening its victims to collect ransoms. Part of the group’s reputation is that it compromises its victims, hunts through their data for GDPR violations, and then sets ransom demands below what the fines for those violations would be.





What percentage of sites must opt out for this to be noticeable? (How could we tell if it works? A quick robots.txt survey, sketched below, would do it.)

https://www.theverge.com/2023/9/28/23894779/google-ai-extended-training-data-toggle-bard-vertex

Google adds a switch for publishers to opt out of becoming AI training data

Google just announced it’s giving website publishers a way to opt out of having their data used to train the company’s AI models while remaining accessible through Google Search. The new tool, called Google-Extended, allows sites to continue to get scraped and indexed by crawlers like the Googlebot while avoiding having their data used to train AI models as they develop over time.

The company says Google-Extended will let publishers “manage whether their sites help improve Bard and Vertex AI generative APIs,” adding that web publishers can use the toggle to “control access to content on a site.” Google confirmed in July that it’s training its AI chatbot, Bard, on publicly available data scraped from the web.
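
Google’s announcement describes Google-Extended as a robots.txt control: a publisher who wants to stay in Search but out of AI training adds a rule group for the new user-agent token. Something like this, with the disallowed path being whatever a site actually wants to shield:

    User-agent: Google-Extended
    Disallow: /

As for the question above (how would we tell whether opt-out is catching on?), robots.txt files are public, so a rough survey is easy to script. A minimal sketch in Python using only the standard library; the domain list is a placeholder, not real data:

    import urllib.robotparser

    def opted_out(domain: str) -> bool:
        """True if the site's robots.txt blocks the Google-Extended token."""
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(f"https://{domain}/robots.txt")
        rp.read()  # fetches and parses the site's robots.txt
        # can_fetch() applies user-agent group matching, so this reflects
        # a "User-agent: Google-Extended / Disallow: /" rule if present.
        return not rp.can_fetch("Google-Extended", f"https://{domain}/")

    sites = ["example.com", "example.org"]  # swap in any corpus of domains
    blocked = [s for s in sites if opted_out(s)]
    print(f"{len(blocked)}/{len(sites)} sites opt out:", blocked)

Run that over a large enough sample of publisher domains periodically and the opt-out rate, and its trend, falls right out.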



(Related)

https://blog.medium.com/default-no-to-ai-training-on-your-stories-abb5b4589c8

Default No to AI Training on Your Stories

Fair use in the age of AI: Credit, compensation and consent are required.

Unfortunately, the AI companies have nearly universally violated fundamental norms of fairness: they are making money on your writing without asking for your consent, and without offering you compensation or credit. There’s a lot more one could ask for, but these “3 Cs” are the minimum.

Now, we’re adding one more dimension to our response. Medium is changing our policy on AI training. The default answer is now: No.

We are doing what we can to block AI companies from training on stories that you publish on Medium and we won’t change that stance until AI companies can address this issue of fairness. If you are such an AI company, and we aren’t already talking, contact us.
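
Medium’s post doesn’t spell out the mechanism, but the standard lever for a publisher is the same robots.txt approach as in the Google-Extended item above: disallow the crawler tokens the AI companies publish. OpenAI’s documented GPTBot token, for example, is blocked like so (an illustration of the general technique, not a claim about Medium’s exact configuration):

    User-agent: GPTBot
    Disallow: /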





Something to watch?

https://www.defense.gov/News/News-Stories/Article/Article/3541838/ai-security-center-to-open-at-national-security-agency/

AI Security Center to Open at National Security Agency

National Security Agency Director Army Gen. Paul M. Nakasone today announced the creation of a new entity to oversee the development and integration of artificial intelligence capabilities within U.S. national security systems.

The AI Security Center will become the focal point for developing best practices, evaluation methodology and risk frameworks with the aim of promoting the secure adoption of new AI capabilities across the national security enterprise and the defense industrial base.





Incidentally, some tools I might be able to use.

https://www.bespacific.com/no-chat-gpt-cant-be-your-new-research-assistant/

No, ChatGPT Can’t Be Your New Research Assistant

Chronicle of Higher Education [subscription req’d]: “…There’s Explainpaper, where one can upload a paper, highlight a confusing portion of the text, and get a more reader-friendly synopsis. There’s jenni, which can help discern if a paper is missing relevant existing research. There’s Quivr, where the user can upload a paper and pose queries like: What are the gaps in this study?… Amy Chatfield, an information-services librarian for the Norris Medical Library at the University of Southern California, can hunt down and deliver to researchers just about any article, book, or journal, no matter how obscure the topic or far-flung the source. So she was stumped when she couldn’t locate any of the 35 sources a researcher had asked her colleague to deliver. Each source included an author, journal, date, and page numbers, and had seemingly legit titles such as “Loan-out corporations for entertainers and athletes: A closer look,” published in the Journal of Legal Tax Research…”
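
Fabricated citations like the ones in that anecdote are screenable, because real articles leave public traces. One approach (a sketch, not the librarians’ actual workflow): query Crossref’s public REST API with the citation text and see whether anything plausible comes back. The endpoint and its query.bibliographic parameter are real; the sample citation comes from the excerpt, and the “no match means unverified” rule is my own assumption for illustration:

    import json
    import urllib.parse
    import urllib.request

    def crossref_lookup(citation: str, rows: int = 3) -> list:
        """Search Crossref for works matching a free-text citation string."""
        params = urllib.parse.urlencode(
            {"query.bibliographic": citation, "rows": rows}
        )
        url = f"https://api.crossref.org/works?{params}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["message"]["items"]

    # The suspect citation from the excerpt above:
    cite = ("Loan-out corporations for entertainers and athletes: A closer look, "
            "Journal of Legal Tax Research")
    for item in crossref_lookup(cite):
        title = (item.get("title") or ["<no title>"])[0]
        journal = (item.get("container-title") or ["?"])[0]
        print(f"{title} -- {journal}")
    # If nothing resembling the claimed title and journal appears,
    # treat the citation as unverified until a human can confirm it.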


