Friday, September 29, 2023

Advocates for GDPR? Hardly.

https://www.techdirt.com/2023/09/28/the-group-claiming-to-have-hacked-sony-is-using-gdpr-as-a-weapon-for-demanding-ransoms/

The Group Claiming To Have Hacked Sony Is Using GDPR As A Weapon For Demanding Ransoms

We’ve spilled a great deal of ink discussing the GDPR and its failures and unintended consequences. The European data privacy law, ostensibly built to protect the data of private citizens but also expected to result in heavy fines for primarily American internet companies, has mostly failed to do either. While the larger American internet players have the money and resources to navigate the GDPR just fine, smaller companies and innovative startups don’t. The end result has been to harm competition, harm innovation, and create a scenario rife with harmful unintended consequences. A bang-up job all around, in other words.

And now we have yet another unintended consequence: hacking groups are beginning to use the GDPR as a weapon to threaten private companies into paying ransoms. You may have heard that a hacking group calling itself Ransomed.vc is claiming to have compromised all of Sony. We don’t yet have proof that the hack is that widespread, but hacking groups generally don’t lie about that sort of thing, since lying ruins their “business” plan. Ransomed.vc has also claimed that if a buyer isn’t found for Sony’s data, it will simply release that data on September 28th. So, as to what they have, I guess we’ll just have to wait and see.

But what really caught my attention was the description of how this particular group goes about issuing threats to its victims in order to collect ransoms. Part of the group’s reputation is that it compromises its victims and then hunts for GDPR violations, pricing its ransom demands below what the fines for those violations would be. Since GDPR fines can run to €20 million or 4% of global annual turnover, whichever is higher, a ransom pitched under that ceiling is framed as the cheaper way out.





What percentage of sites must opt out for this to be noticeable? (How can we tell if it works?)

https://www.theverge.com/2023/9/28/23894779/google-ai-extended-training-data-toggle-bard-vertex

Google adds a switch for publishers to opt out of becoming AI training data

Google just announced it’s giving website publishers a way to opt out of having their data used to train the company’s AI models while remaining accessible through Google Search. The new tool, called Google-Extended, allows sites to continue to get scraped and indexed by crawlers like the Googlebot while avoiding having their data used to train AI models as they develop over time.

The company says Google-Extended will let publishers “manage whether their sites help improve Bard and Vertex AI generative APIs,” adding that web publishers can use the toggle to “control access to content on a site.” Google confirmed in July that it’s training its AI chatbot, Bard, on publicly available data scraped from the web.
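Under the hood, Google-Extended is a robots.txt product token rather than a new crawler, so opting out amounts to a two-line addition to a site’s existing robots.txt file. A minimal example, blocking AI training site-wide while leaving Googlebot (and thus Search indexing) untouched:

    User-agent: Google-Extended
    Disallow: /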



(Related)

https://blog.medium.com/default-no-to-ai-training-on-your-stories-abb5b4589c8

Default No to AI Training on Your Stories

Fair use in the age of AI: Credit, compensation and consent are required.

Unfortunately, the AI companies have almost universally violated fundamental norms of fairness: they are making money on your writing without asking for your consent, and without offering you compensation or credit. There’s a lot more one could ask for, but these “3 Cs” are the minimum.

Now, we’re adding one more dimension to our response. Medium is changing our policy on AI training. The default answer is now: No.

We are doing what we can to block AI companies from training on stories that you publish on Medium, and we won’t change that stance until AI companies can address this issue of fairness. If you are such an AI company, and we aren’t already talking, contact us.
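Medium’s post doesn’t spell out its blocking mechanism, but the standard lever publishers are pulling here is again robots.txt, aimed this time at the AI companies’ own crawlers. For example, OpenAI’s documented GPTBot token works the same way as Google-Extended above:

    User-agent: GPTBot
    Disallow: /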





Something to watch?

https://www.defense.gov/News/News-Stories/Article/Article/3541838/ai-security-center-to-open-at-national-security-agency/

AI Security Center to Open at National Security Agency

National Security Agency Director Army Gen. Paul M. Nakasone today announced the creation of a new entity to oversee the development and integration of artificial intelligence capabilities within U.S. national security systems.

The AI Security Center will become the focal point for developing best practices, evaluation methodology and risk frameworks with the aim of promoting the secure adoption of new AI capabilities across the national security enterprise and the defense industrial base.





Incidentally, some tools I might be able to use.

https://www.bespacific.com/no-chat-gpt-cant-be-your-new-research-assistant/

No, Chat GPT Can’t Be Your New Research Assistant

Chronicle of Higher Education [subscription req’d]: “…There’s Explainpaper, where one can upload a paper, highlight a confusing portion of the text, and get a more reader-friendly synopsis. There’s jenni, which can help discern if a paper is missing relevant existing research. There’s Quivr, where the user can upload a paper and pose queries like: What are the gaps in this study?… Amy Chatfield, an information-services librarian for the Norris Medical Library at the University of Southern California, can hunt down and deliver to researchers just about any article, book, or journal, no matter how obscure the topic or far-flung the source. So she was stumped when she couldn’t locate any of the 35 sources a researcher had asked her colleague to deliver. Each source included an author, journal, date, and page numbers, and had seemingly legit titles such as “Loan-out corporations for entertainers and athletes: A closer look,” published in the Journal of Legal Tax Research…”
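Fabricated citations of this kind can be screened mechanically. As a minimal sketch (my own illustration, not a tool mentioned in the article), one can ask the public Crossref API whether a cited title exists anywhere in the scholarly record:

    import json
    import urllib.parse
    import urllib.request

    def crossref_title_hits(title: str) -> list:
        # Query the public Crossref API for works matching a cited title.
        params = urllib.parse.urlencode({"query.title": title, "rows": 3})
        url = f"https://api.crossref.org/works?{params}"
        with urllib.request.urlopen(url, timeout=30) as resp:
            items = json.load(resp)["message"]["items"]
        return [t for item in items for t in item.get("title", [])]

    suspect = "Loan-out corporations for entertainers and athletes: A closer look"
    hits = crossref_title_hits(suspect)
    # Zero plausible matches suggests the citation was invented.
    print(hits or "No match found; citation may be fabricated.")

A human still has to judge near-misses, but zero hits across all 35 sources would have flagged this list in seconds.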



Thursday, September 28, 2023

How can this be a good thing? If nothing else, ChatGPT can now be influenced by all the AI-generated nonsense on the web.

https://www.bbc.com/news/technology-66940771

ChatGPT can now access up to date information

OpenAI, the Microsoft-backed creator of ChatGPT, has confirmed the chatbot can now browse the internet to provide users with current information.

The artificial intelligence-powered system was previously trained only using data up to September 2021.





Just how influential is Taylor Swift?

https://www.usatoday.com/story/money/food/2023/09/27/heinz-ketchup-and-seemingly-ranch-taylor-swift/70987634007/

Heinz announces new product after Taylor Swift condiment choice goes viral at Chiefs game

Heinz announced that it will sell a limited number of Ketchup and Seemingly Ranch bottles after Taylor Swift's appearance at Sunday's Kansas City Chiefs game went viral on social media.

The pop star was seen enjoying a plate of chicken strips with the two condiments, and the moment made an immediate impact on the food industry, much as her previous appearance helped boost sales of Travis Kelce jerseys.



(Related) Enough to swing an election?

https://www.msnbc.com/the-beat-with-ari/watch/taylor-swift-can-beat-trump-again-in-2024-so-fox-pundits-panic-193932869974

Taylor Swift can beat Trump again in 2024, so Fox pundits panic

From football to feminism, Taylor Swift continues to build on her reach. Swift’s “Eras” tour is projected to generate a $5 billion impact on the economy. Swift also backed Biden in 2020 and encouraged fans to register to vote. Facing that influence, some GOP pundits are trying to knock Swift and the excitement over her NFL appearance.



Wednesday, September 27, 2023

Interesting.

https://www.scientificamerican.com/article/does-the-first-amendment-confer-a-right-to-compute-the-future-of-ai-may-depend-on-it/

Does the First Amendment Confer a ‘Right to Compute’? The Future of AI May Depend on It

Federal appeals courts have considered the First Amendment aspects of computer code in only a limited number of cases. The Second, Sixth and (in an opinion that was subsequently withdrawn on procedural grounds) Ninth Circuits have concluded that computer code can receive First Amendment protection. The Sixth Circuit, for example, wrote that “because computer source code is an expressive means for the exchange of information and ideas about computer programming, we hold that it is protected by the First Amendment.”

The question of when computer code is expressive is related to, but distinct from, asking whether the purpose of performing the resulting computation is expressive.





Imagine that.

https://www.makeuseof.com/reasons-chatgpt-is-dying/

The 4 Reasons OpenAI's ChatGPT Is Dying

  • ChatGPT's market share is under threat as competitors catch up, replicating its capabilities and encroaching on its user base.

  • The cost of running ChatGPT is unsustainable, with OpenAI spending millions of dollars daily to keep it running, leading to financial losses.

  • OpenAI is facing copyright lawsuits from creators, putting ChatGPT at risk if legal loopholes are exploited, potentially derailing the project.

  • Big Tech's ecosystem-based strategy and convenient integration of AI tools pose a significant challenge to ChatGPT's long-term success.





If it is used to find relevant data, that’s good. If it is asked for conclusions, we’re in trouble.

https://www.bloomberg.com/news/articles/2023-09-26/cia-builds-its-own-artificial-intelligence-tool-in-rivalry-with-china?leadSource=uverify%20wall

CIA Builds Its Own Artificial Intelligence Tool in Rivalry With China

US intelligence agencies are getting their own ChatGPT-style tool to sift through an avalanche of public information for clues.

The Central Intelligence Agency is preparing to roll out a feature akin to OpenAI Inc.’s now-famous program that will use artificial intelligence to give analysts better access to open-source intelligence, according to agency officials. The CIA’s Open-Source Enterprise division plans to provide intelligence agencies with its AI tool soon.



Tuesday, September 26, 2023

Redefining copyright or redefining theft?

https://abcnews.go.com/Technology/authors-lawsuit-openai-fundamentally-reshape-artificial-intelligence-experts/story?id=103379209

Authors' lawsuit against OpenAI could 'fundamentally reshape' artificial intelligence, according to experts

A group of prominent authors joined a proposed class action lawsuit filed against OpenAI over allegations that products like ChatGPT make illegal use of their copyrighted work, setting off a high-profile legal clash.

"At the heart of these algorithms is systemic theft on a massive scale," the lawsuit claims.

The case could fundamentally shape the direction and capabilities of generative AI, either imposing a new set of limits on a mechanism at the core of the technology or cementing an expansive approach to online material that has fueled the rise of products currently offered, legal analysts told ABC News.



(Related)

https://techcrunch.com/2023/09/25/signals-meredith-whittaker-ai-is-fundamentally-a-surveillance-technology/

Signal’s Meredith Whittaker: AI is fundamentally ‘a surveillance technology’

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”

Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. (Her remarks lightly edited for clarity.)

“It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said. “The Venn diagram is a circle.”





Who will do this in the US?

https://www.bespacific.com/the-cambridge-law-corpus-a-corpus-for-legal-ai-research/

The Cambridge Law Corpus: A Corpus for Legal AI Research

The Cambridge Law Corpus: A Corpus for Legal AI Research Andreas Östling, Holli Sargeant, Huiyuan Xie, Ludwig Bull, Alexander Terenin, Leif Jonsson, Måns Magnusson, Felix Steffek. arXiv:2309.12269 [cs.CL] [v1] Thu, 21 Sep 2023 17:24:40 UTC

We introduce the Cambridge Law Corpus (CLC), a corpus for legal AI research. It consists of over 250,000 court cases from the UK. Most cases are from the 21st century, but the corpus includes cases as old as the 16th century. This paper presents the first release of the corpus, containing the raw text and meta-data. Together with the corpus, we provide annotations on case outcomes for 638 cases, done by legal experts. Using our annotated data, we have trained and evaluated case outcome extraction with GPT-3, GPT-4 and RoBERTa models to provide benchmarks. We include an extensive legal and ethical discussion to address the potentially sensitive nature of this material. As a consequence, the corpus will only be released for research purposes under certain restrictions.
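The corpus itself is access-restricted, but the benchmark the abstract describes, case outcome extraction, has the shape of a standard text-classification task. A rough sketch of that setup (illustrative only; the labels and example text below are placeholders, not the real CLC annotation schema):

    # Illustrative sketch: labels and input are invented placeholders,
    # not the actual Cambridge Law Corpus schema.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2)  # e.g., claimant wins vs. loses

    judgment = "The appeal is dismissed with costs awarded to the respondent."
    inputs = tokenizer(judgment, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)
    print(probs)  # outcome probabilities (meaningless until fine-tuned)

Fine-tuning a model like this on the 638 expert-annotated cases, or prompting GPT-4 with the same judgments, yields the kind of benchmark comparison the paper reports.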





Perspective. (Video)

https://www.gzeromedia.com/gzero-world-clips/why-human-beings-are-so-easily-fooled-by-ai-psychologist-steven-pinker-explains

Why human beings are so easily fooled by AI, psychologist Steven Pinker explains

There's no question that AI will change the world, but the verdict is still out on exactly how. But one thing that is already clear: people are going to confuse it with humans. And we know this because it's already happening. That's according to Harvard psychologist Steven Pinker, who joined Ian Bremmer on GZERO World for a wide-ranging conversation about his surprisingly optimistic outlook on the world and the way that AI may affect it.

"People are too easily fooled. It doesn't take much to fool a user or an observer into attributing a lot of intelligence to the system that they're dealing with, even if it's rather stupid."

So what should regulators do to rein AI in? Especially when it comes to children?





My solution? Don’t talk to anyone!

https://www.bespacific.com/new-phone-call-etiquette-text-first-and-never-leave-a-voice-mail/

New phone call etiquette: Text first and never leave a voice mail

Washington Post: “Phone calls have been around for 147 years, the iPhone 16 years and FaceTime video voice mails about a week. Not surprisingly, how we make calls has changed drastically alongside advances in technology. Now people can have conversations in public on their smartwatches, see voice mails transcribed in real time and dial internationally midday without stressing about the cost. The phone norms also change quickly, causing some people to feel left behind or confused. The unwritten rules of chatting on the phone differ wildly between generations, leading to misunderstandings and frustration on all sides. We spoke to an etiquette expert and people of all ages about their own phone pet peeves to come up with the following guidance to help everyone navigate phone calls in 2023…”



Monday, September 25, 2023

The risks of a privacy failure. (Along with all the others…)

https://www.cpomagazine.com/data-privacy/embracing-privacy-by-design-as-a-corporate-responsibility/

Embracing Privacy by Design as a Corporate Responsibility

Advertising companies and their technology partners are increasingly recognising the value of a paradigm shift: from treating data protection as a burdensome obligation to adopting a framework of “privacy by design.” Companies that take this route see three big results: lower costs to adapt to new legislation, growth in consumer confidence and trust, and less risk to the business when the inevitable mishap occurs. And the first step on this route rests on the prominence of data.





It’s in my local library…

https://techpolicy.press/your-face-belongs-to-us-a-conversation-with-kashmir-hill/

Your Face Belongs to Us: A Conversation with Kashmir Hill

In 2019, journalist Kashmir Hill had just joined The New York Times when she got a tip about the existence of a company called Clearview AI that claimed it could identify almost anyone from a photo. But the company was hard to contact, and people who knew about it didn’t want to talk. Hill resorted to old-fashioned shoe-leather reporting, trying to track down the company and its executives. By January of 2020, the Times was ready to report what she had learned in a piece titled “The Secretive Company That Might End Privacy as We Know It.”

Three years later, Hill has published a book that tells the story of Clearview AI, but with the benefit of three more years of reporting and study on the social, political, and technological forces behind it. It’s called Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy As We Know It, just out from Penguin Random House.





Perspective.

https://venturebeat.com/ai/the-ai-age-of-uncertainty/

The AI ‘Age of Uncertainty’

Already, the impact of AI is becoming more discernible in daily life. From AI-generated songs, to haikus written in the style of Shakespeare, to self-driving vehicles, to chatbots that can imitate lost loved ones and AI assistants that help us with work, the technology is becoming pervasive.

AI will soon become much more prevalent with the approaching AI tsunami. Wharton School professor Ethan Mollick recently wrote about the results of an experiment on the future of professional work. The experiment centered on two groups of consultants working for the Boston Consulting Group. Each group was given a set of common tasks. One group was able to use currently available AI to augment its efforts while the other was not.

Mollick reported: “Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without.”

Of course, it is possible that problems inherent in large language models (LLMs), such as confabulation and bias, may cause this wave to simply dissipate, although that now appears unlikely. While the technology is already demonstrating its disruptive potential, it will take several years until we experience the full power of the tsunami. Here is a look at what is coming.



Sunday, September 24, 2023

Something to think about.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573316

The Myth of Children’s Online Privacy Protection

Digital technology has changed the landscape young people face as they come of age. It has changed how children interact with their parents, schools, community organizations, and the state. Despite many benefits, digital technologies that employ data collection, algorithms, and artificial intelligence pose significant risks for the next generation. Private businesses can collect, use, and sell a child’s data in ways never imagined by their families. Information collected by third parties with good intentions can be stolen through data breaches. Through faulty algorithms, websites can make inaccurate assumptions about young people’s interests, teachers can make inaccurate assumptions about a student’s potential achievement, and the state can make inaccurate assumptions about a child’s risk of engaging in criminal activity. These technologies can also feed young people information that is harmful to their physical or psychological well-being. Meanwhile, current efforts in the United States that promise to protect children online threaten to undermine their right to privacy.

United States federal and state laws are ill-equipped to truly offer children online privacy protections. There are few legal remedies available to young people whose data is used in malicious ways. The remedies that do exist are insufficient to simultaneously safeguard children while respecting privacy as young people mature. To that end, this Article seeks to explore how current laws centered on children’s online protection are inadequate, and why the U.S.’s obsession with keeping young people safe online often curtails their need for agency and autonomy. It offers a cogent path forward to guide federal and state policymakers as they update children’s online privacy protection law.





Yeah, AI is coming. Deal with it.

https://www.londonic.uk/js/index.php/ljbeh/article/view/103

Empowering education with AI: Addressing ethical concerns

There has been a rapid advancement of technology in the realm of education, and artificial intelligence (AI) has become just one of the many tools utilized by members of educational institutions. However, with the swift integration of AI into the education system, many ethical challenges and dilemmas have surfaced, driven primarily by students’ misuse of this transformative technology. The potential impact on students’ critical thinking skills, autonomy, and ethical decision-making further highlights the urgency of addressing these issues. This article explores the detrimental effects resulting from the unethical use of AI and proposes policies and guidelines to maximize the beneficial utilization of AI within educational institutions. Additionally, a comprehensive analysis of relevant studies is presented to support the argument and contribute to the development of an AI learning environment in which both students and faculty can prosper.