Saturday, September 02, 2023

I don’t think they like it…

https://www.pogowasright.org/protecting-kids-on-social-media-act-cloaks-attack-on-privacy-behind-concern-for-children/

Protecting Kids on Social Media Act Cloaks Attack on Privacy Behind Concern for Children

J.D. Tuccille writes:

There’s seemingly no policy turd that lawmakers are unwilling to polish in the name of “the children.” That brings us to the Protecting Kids on Social Media Act, currently working its way through the U.S. Senate. This measure borrows bad proposals from another federal bill and combines them with legislative idiocy enacted at the state level. The resulting concoction could destroy internet privacy, subjecting all our online activity to government scrutiny in the name of shielding wee ones from harm.

Read more at Reason.





Because we should not rely on our AI to tell us… (Full disclosure: My AI disagrees with me.)

https://dailynous.com/2023/09/01/how-to-tell-whether-an-ai-is-conscious-guest-post/

How to Tell Whether an AI Is Conscious (guest post)

“We can apply scientific rigor to the assessment of AI consciousness, in part because… we can identify fairly clear indicators associated with leading theories of consciousness, and show how to assess whether AI systems satisfy them.”

In the following guest post, Jonathan Simon (Montreal) and Robert Long (Center for AI Safety) summarize their recent interdisciplinary report, “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.”





Because it’s coming, no matter what we think.

https://mashable.com/article/chatgpt-ai-guide-for-educators-teachers

OpenAI releases new teacher guide for ChatGPT in classrooms

Amid continued uncertainty — and equal amounts of growing interest — surrounding the use of ChatGPT in classrooms, OpenAI released a new Teaching with AI guide to help educators effectively incorporate the generative AI tool in their students' learning.

The resource includes an Educator FAQ on ChatGPT's use, as well as learning prompts to support interested educators seeking ways to incorporate ChatGPT in learning environments or their own classroom planning. It also includes OpenAI's suggested uses for the AI chatbot, including generating lesson plans and quizzes, roleplaying conversations or debates, and mediating classroom hurdles for English language learners.

Implied in OpenAI's guide is the expectation that educators maintain oversight over ChatGPT's use, suggesting that both teachers and students collaborate and share their ChatGPT conversations with each other as they explore the technology. The prompts function as primers for the AI chatbot that the educator can then offer as examples for students or fellow teachers, or send directly to students for their own use in assignments.
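
For the curious, here is a rough sketch of what a primer-style prompt looks like in practice. It assumes the openai Python package (the pre-1.0 ChatCompletion interface) and an invented lesson-planning primer; it is an illustration, not anything taken from OpenAI's guide.

```python
# A minimal sketch of the "prompt as primer" idea. Assumes the openai package
# (pre-1.0 ChatCompletion interface) and an API key in OPENAI_API_KEY.
# The primer text below is hypothetical, not one of OpenAI's published prompts.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

primer = (
    "You are a lesson-planning assistant for a 9th-grade biology teacher. "
    "Ask clarifying questions about the class before suggesting activities, "
    "and keep every suggestion tied to a stated learning objective."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": primer},
        {"role": "user", "content": "Draft a 5-question quiz on cell respiration."},
    ],
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```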





Resource.

https://gwtoday.gwu.edu/database-gw-law-informs-users-litigation-relating-ai

A Database from GW Law Informs Users on Litigation Relating to AI

Perhaps no area of law is growing so quickly as that surrounding artificial intelligence (AI). It can be a challenge to keep up with recent developments in this field, but Robert Brauneis, the Michael J. McKeon Professor of Intellectual Property Law, is making it easier with a database dedicated to AI litigation.

Spearheaded by Brauneis, the online, searchable AI Litigation Database was created to help lawyers, scholars, journalists and others stay informed. The database might also be useful for potential plaintiffs or potential defendants who want to research a specific question. Brauneis and the students in his course “Law in the Algorithmic Society” update the database when they learn of relevant cases.



Friday, September 01, 2023

This suggests a number of potential problems. For example, who vets the training data?

https://www.bespacific.com/a-i-s-un-learning-problem/

A.I.’s un-learning problem

Fortune: – Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data …“If a machine learning-based system has been trained on data, the only way to retroactively remove a portion of that data is by re-training the algorithms from scratch,” Anasse Bari, an A.I. expert and computer science professor at New York University, told Fortune.

The problem goes beyond private data. If an A.I. model is discovered to have gleaned biased or toxic data, say from racist social media posts, weeding out the bad data will be tricky. Training or retraining an A.I. model is expensive. This is particularly true for the ultra-large “foundation models” that are currently powering the boom in generative A.I. Sam Altman, the CEO of OpenAI, has reportedly said that GPT-4, the large language model that powers its premium version of ChatGPT, cost in excess of $100 million to train.

That’s why, to companies developing A.I. models, a powerful tool that the U.S. Federal Trade Commission has to punish companies it finds have violated U.S. trade laws is scary. The tool is called “algorithmic disgorgement.” It’s a legal process that penalizes the law-breaking company by forcing it to delete an offending A.I. model in its entirety. The FTC has only used that power a handful of times, typically directed at companies who have misused data. One well known case where the FTC did use this power is against a company called Everalbum, which trained a facial recognition system using people’s biometric data without their permission…”
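
The “retrain from scratch” point is easier to see in code. A minimal sketch, assuming scikit-learn and synthetic data: a fitted model has no delete operation, so honoring a removal request means rebuilding the training set and fitting again.

```python
# Illustration of the "un-learning" problem: with an ordinary trained model,
# the only clean way to forget specific records is to drop them and refit.
# Uses scikit-learn and synthetic data purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)  # original model, trained on everything

# A deletion request arrives for rows 100-199. There is no "remove" operation
# on the fitted model, so we filter the training data and retrain entirely.
keep = np.ones(len(X), dtype=bool)
keep[100:200] = False
model_retrained = LogisticRegression().fit(X[keep], y[keep])
```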



(Related)

https://www.bespacific.com/the-case-for-large-language-model-optimism-in-legal-research-from-a-law-technology-librarian/

The Case For Large Language Model Optimism in Legal Research From A Law & Technology Librarian

Via LLRX: The Case For Large Language Model Optimism in Legal Research From A Law & Technology Librarian. The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift. This article by Sean Harrington critically evaluates the advent and fine-tuning of Law-Specific LLMs, such as those offered by Casetext, Westlaw, and Lexis. Unlike generalized models, these specialized LLMs draw from databases enriched with authoritative legal resources, ensuring accuracy and relevance. Harrington highlights the importance of advanced prompting techniques and the innovative utilization of embeddings and vector databases, which enable semantic searching, a critical aspect in retrieving nuanced legal information. Furthermore, the article addresses the ‘Black Box Problem’ and explores remedies for transparency. It also discusses the potential of crowdsourcing secondary materials as a means to democratize legal knowledge. In conclusion, this article emphasizes that Law-Specific LLMs, with proper development and ethical considerations, can revolutionize legal research and practice, while calling for active engagement from the legal community in shaping this emerging technology.
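
For readers unfamiliar with the embeddings-and-vector-search idea Harrington describes, here is a minimal sketch. It assumes the sentence-transformers package and toy documents; a real legal research system would sit on a proper vector database rather than an in-memory array.

```python
# Minimal semantic search: embed documents and a query, rank by cosine similarity.
# Assumes the sentence-transformers package; documents are toy examples.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "A contract requires offer, acceptance, and consideration.",
    "Adverse possession requires open and notorious use of the land.",
    "The statute of limitations for breach of contract varies by state.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

query = "What elements make an agreement legally binding?"
query_vector = model.encode([query], normalize_embeddings=True)[0]

# On normalized vectors, cosine similarity reduces to a dot product.
scores = doc_vectors @ query_vector
print(documents[int(np.argmax(scores))])
```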





Still not generic enough?

https://www.brookings.edu/articles/a-comprehensive-and-distributed-approach-to-ai-regulation/

A comprehensive and distributed approach to AI regulation

While algorithmic systems have become widely used for many impactful socioeconomic determinations, these algorithms are unique to their circumstances. This challenge warrants an approach to governing algorithms that comprehensively enables application-specific oversight. To address this challenge, this paper proposes granting two new authorities for key regulatory agencies: (1) administrative subpoena authority for algorithmic investigations, and (2) rulemaking authority for especially impactful algorithms within federal agencies’ existing regulatory purview. This approach requires the creation of a new regulatory instrument, introduced here as the Critical Algorithmic Systems Classification, or CASC. The CASC enables a comprehensive approach to developing application-specific rules for algorithmic systems and, in doing so, maintains longstanding consumer and civil rights protections without necessitating a parallel oversight regime for algorithmic systems.





The accompanying illustration is brilliant!

https://www.economist.com/leaders/2023/08/31/how-artificial-intelligence-will-affect-the-elections-of-2024

How worried should you be about AI disrupting elections?

Disinformation will become easier to produce, but it matters less than you might think





All that is not forbidden is mandatory. All that is not mandatory is forbidden.

https://www.zdnet.com/article/one-in-four-workers-fears-being-considered-lazy-if-they-use-ai-tools/

One in four workers fears being considered 'lazy' if they use AI tools

Fear of being judged appears to be a major obstacle preventing workers from using AI.



Thursday, August 31, 2023

This is probably too late and almost certainly ineffective.

https://www.axios.com/2023/08/31/major-websites-are-blocking-ai-crawlers-from-accessing-their-content

Major websites are blocking AI crawlers from accessing their content

Nearly 20% of the top 1000 websites in the world are blocking crawler bots that gather web data for AI services, according to new data from Originality.AI, an AI content detector.

Why it matters: In the absence of clear legal or regulatory rules governing AI's use of copyrighted material, websites big and small are taking matters into their own hands.

Driving the news: OpenAI introduced its GPTBot crawler early in August, declaring that the data gathered "may potentially be used to improve future models," promising that paywalled content would be excluded and instructing websites in how to bar the crawler.

Soon after, several high-profile news sites, including the New York Times, Reuters and CNN, began blocking GPTBot, and many more have since followed. (Axios is among them.)
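
The blocking itself is low-tech: a site publishes a robots.txt rule for the GPTBot user agent, and a well-behaved crawler checks it before fetching. A minimal sketch using Python's standard library (example.com is a placeholder):

```python
# Sites bar the crawler with a robots.txt rule such as:
#     User-agent: GPTBot
#     Disallow: /
# This sketch checks whether a given site's robots.txt allows GPTBot.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("GPTBot", "https://example.com/some-article"):
    print("GPTBot is allowed to crawl this page")
else:
    print("GPTBot is barred by robots.txt")
```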





Finally doing something about it?

https://techcrunch.com/2023/08/30/chatgpt-maker-openai-accused-of-string-of-data-protection-breaches-in-gdpr-complaint-filed-by-privacy-researcher/

ChatGPT-maker OpenAI accused of string of data protection breaches in GDPR complaint filed by privacy researcher

Questions about ChatGPT-maker OpenAI’s ability to comply with European privacy rules are in the frame again after a detailed complaint was filed with the Polish data protection authority yesterday.

The complaint, which TechCrunch has reviewed, alleges the U.S.-based AI giant is in breach of the bloc’s General Data Protection Regulation (GDPR) — across a sweep of dimensions: Lawful basis, transparency, fairness, data access rights, and privacy by design are all areas it argues OpenAI is infringing EU privacy rules. (Aka, Articles 5(1)(a), 12, 15, 16 and 25(1) of the GDPR).

Indeed, the complaint frames the novel generative AI technology and its maker’s approach to developing and operating the viral tool as essentially a systematic breach of the pan-EU regime. Another suggestion, therefore, is that OpenAI has overlooked another requirement in the GDPR to undertake prior consultation with regulators (Article 36) — since, if it had conducted a proactive assessment which identified high risks to people’s rights unless mitigating measures were applied, it should have given pause for thought. Yet OpenAI apparently rolled ahead and launched ChatGPT in Europe without engaging with local regulators which could have ensured it avoided falling foul of the bloc’s privacy rulebook.





Of course you might wind up paying both… (Is “fine evasion” like “tax evasion?”)

https://cybernews.com/security/gdpr-abused-ransomware-extortion/

GDPR used by new ransom gang to extort victims

Appropriately called Ransomed, the group was first spotted by cybersecurity analyst and blogger Flashpoint on August 15th. It comes complete with the usual dedicated Telegram channel and also sports a “ransomed” domain name for what appears to be a flagship website.

What isn’t usual about Ransomed is its novel use of GDPR to pressure victims into paying up once it has carried out a data breach.

“Ransomed is leveraging an extortion tactic that has not been observed before — according to communications from the group, they use data protection laws like the EU’s GDPR to threaten victims with fines if they do not pay the ransom,” said Flashpoint. “This tactic marks a departure from typical extortionist operations by twisting protective laws against victims to justify their illegal attacks.”

Flashpoint adds that it believes Ransomed’s strategy is probably to set ransom payment demands lower than the cost of incurring a fine for a data security violation to increase the chances of a victim paying up.
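
Flashpoint's point about pricing is simple arithmetic. A hypothetical sketch, using the GDPR Article 83(5) ceiling (the greater of EUR 20 million or 4% of worldwide annual turnover) and an invented turnover figure:

```python
# Back-of-the-envelope version of the pricing logic Flashpoint describes.
# GDPR Article 83(5) caps fines at the greater of EUR 20M or 4% of worldwide
# annual turnover; the turnover and discount below are purely hypothetical.
annual_turnover_eur = 500_000_000  # hypothetical victim revenue
max_fine_eur = max(20_000_000, 0.04 * annual_turnover_eur)

# The gang's incentive: demand meaningfully less than the worst-case fine.
ransom_demand_eur = 0.25 * max_fine_eur
print(f"Maximum GDPR fine:          EUR {max_fine_eur:,.0f}")
print(f"Hypothetical ransom demand: EUR {ransom_demand_eur:,.0f}")
```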





Tools & Techniques.

https://www.bespacific.com/how-to-talk-to-an-ai-chatbot/

How to talk to an AI chatbot

Washington Post – An ordinary human’s guide to getting extraordinary results from a chatbot: “ChatGPT doesn’t come with an instruction manual. But maybe it should. Only a quarter of Americans who have heard of the AI chatbot say they have used it, Pew Research Center reported this week. “The hardest lesson” for new AI chatbot users to learn, says Ethan Mollick, a Wharton professor and chatbot enthusiast, “is that they’re really difficult to use.” Or at least, to use well. The Washington Post talked with Mollick and other experts about how to get the most out of AI chatbots — from OpenAI’s ChatGPT to Google’s Bard and Microsoft’s Bing — and how to avoid common pitfalls. Often, users’ first mistake is to treat them like all-knowing oracles, instead of the powerful but flawed language tools that they really are. Here’s our guide to their favorite strategies for asking a chatbot to help with explaining, writing and brainstorming. Just select a topic and follow along…”

See also MIT Technology Review – Large language models aren’t people. Let’s stop testing them as if they were. With hopes and fears about this technology running wild, it’s time to agree on what it can and can’t do.
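
The experts' core advice boils down to giving the chatbot a role, context, and constraints rather than a bare question. A small illustration (the prompts are invented, not taken from the Post's guide):

```python
# Two ways to ask the same thing. Either string would go in the "user" message
# of a chat-style API call; the structured version is what the guide's experts
# recommend. Prompt text is invented for illustration.
bare_prompt = "Explain inflation."

structured_prompt = (
    "Act as an economics tutor for a high-school student. "
    "Explain inflation in under 200 words, using one grocery-store example, "
    "then list two common causes. Define any technical term you use."
)

messages = [{"role": "user", "content": structured_prompt}]
print(messages)
```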



Wednesday, August 30, 2023

Safety? Security? What are they talking about?

https://www.bespacific.com/new-lc-report-on-safety-security-of-artificial-intelligence-systems/

New LC Report on Safety, Security of Artificial Intelligence Systems

In Custodia Legis: “The use of artificial intelligence (AI) has increased exponentially and is permeating every aspect of our lives, from personal to professional. While it can be used in many positive ways to solve global challenges, there are also security risks to be considered, such as fundamental rights infringements, personal data security, and harmful uses. In order to ensure that AI systems are used to benefit society, jurisdictions worldwide are looking into ways to regulate AI.

The Global Legal Research Directorate (GLRD) of the Law Library of Congress recently completed research on legal requirements related to the safety and security of AI systems in Australia, Canada, the European Union (EU), New Zealand, and the United Kingdom (UK). We are excited to share with you the report that resulted from this research, Safety and Security of Artificial Intelligence Systems. Whereas the EU intends to adopt its legislative proposal for a specific Artificial Intelligence Act by the end of 2023, and the Canadian government introduced an Artificial Intelligence and Data Act (AIDA) in June 2022, other surveyed jurisdictions have not yet enacted or advanced similar specific legislation related to AI. However, some surveyed jurisdictions have general legislation mentioning AI in specific provisions and all surveyed jurisdictions apply general legislation to AI.

The report looks in particular at the definition of AI systems, cybersecurity requirements for AI systems, the security of personal data, and AI security policy across the supply chain, as applicable. Cybersecurity requirements include, among other things, compliance with requirements with regard to risk management systems; data and data governance; record keeping; transparency and provision of information to users; human oversight; appropriate levels of robustness; and conformity assessments.

We invite you to review the information provided in our report. This report is an addition to the Law Library’s Legal Reports (Publications of the Law Library of Congress) collection, which includes over 4,000 historical and contemporary legal reports covering a variety of jurisdictions, researched and written by foreign law specialists with expertise in each area. The Law Library also regularly publishes articles related to artificial intelligence in the Global Legal Monitor.”





Not sure this is the final answer, but it raises some points…

https://www.nytimes.com/2023/08/24/technology/how-schools-can-survive-and-maybe-even-thrive-with-ai-this-fall.html

How Schools Can Survive (and Maybe Even Thrive) With A.I. This Fall

… First, I encourage educators — especially in high schools and colleges — to assume that 100 percent of their students are using ChatGPT and other generative A.I. tools on every assignment, in every subject, unless they’re being physically supervised inside a school building.

Second, schools should stop relying on A.I. detector programs to catch cheaters. There are dozens of these tools on the market now, all claiming to spot writing that was generated with A.I., and none of them work reliably well. They generate lots of false positives, and can be easily fooled by techniques like paraphrasing. Don’t believe me? Ask OpenAI, the maker of ChatGPT, which discontinued its A.I. writing detector this year because of a “low rate of accuracy.”

My third piece of advice — and the one that may get me the most angry emails from teachers — is that teachers should focus less on warning students about the shortcomings of generative A.I. than on figuring out what the technology does well.

There are resources for educators who want to bone up on A.I. in a hurry. Mr. Kotran’s organization has a number of A.I.-focused lesson plans available for teachers, as does the International Society for Technology in Education. Some teachers have also begun assembling recommendations for their peers, such as a website made by faculty at Gettysburg College that provides practical advice on generative A.I. for professors.





Tools & Techniques

https://www.bespacific.com/free-tiny-tools/

Free Tiny Tools

The Best Free Online Web Tools You Will Ever Need – “FreeTinyTools provides free online conversion, Text Tools, Image Processing, Online Calculators, Unit Converter Tools, Binary Converter Tools, Website Management Tools, Development Tools and other handy tools to help you solve problems of all types. All files both processed and unprocessed are deleted after 15 minutes.”



Tuesday, August 29, 2023

Inevitable, again? Perhaps strong rules reduce revenue?

https://www.washingtonpost.com/technology/2023/08/28/ai-2024-election-campaigns-disinformation-ads/

ChatGPT breaks its own rules on political messages

When OpenAI last year unleashed ChatGPT, it banned political campaigns from using the artificial intelligence-powered chatbot — a recognition of the potential election risks posed by the tool.

But in March, OpenAI updated its website with a new set of rules limiting only what the company considers the most risky applications. These rules ban political campaigns from using ChatGPT to create materials targeting specific voting demographics, a capability that could be abused to spread tailored disinformation at an unprecedented scale.

Yet an analysis by The Washington Post shows that OpenAI for months has not enforced its ban.





As expected?

https://cointelegraph.com/news/consumers-increase-distrust-artificial-intelligence-salesforce-survey

Consumer surveys show a growing distrust of AI and firms that use it

A global consumer survey from Salesforce shows a growing distrust toward firms that use AI, while an Australian survey found most believe it creates more problems than it solves.

… On Aug. 28, the customer relationship software firm released survey results from over 14,000 consumers and firms in 25 countries that suggested nearly three-quarters of customers are concerned about the unethical use of AI.

Over 40% of surveyed customers do not trust companies to use AI ethically, and nearly 70% said it’s more important for companies to be trustworthy as AI tech advances.





Worth a read?

https://sloanreview.mit.edu/audio/protecting-society-from-ai-harms-amnesty-internationals-matt-mahmoudi-and-damini-satija-part-1/

Protecting Society From AI Harms: Amnesty International’s Matt Mahmoudi and Damini Satija (Part 1)

… On this episode of the Me, Myself, and AI podcast, Matt and Damini join hosts Sam Ransbotham and Shervin Khodabandeh to highlight scenarios in which AI tools can put human rights at risk, such as when governments and public-sector agencies use facial recognition systems to track social activists or algorithms to make automated decisions about public housing access and child welfare. Damini and Matt caution that AI technology cannot fix human problems like bias, discrimination, and inequality; that will take human intervention and changes to public policy.



Monday, August 28, 2023

If we train AI on increasing levels of AI generated data at what point do all the answers become delusional?

https://www.bespacific.com/experts-90-of-online-content-will-be-ai-generated-by-2026/

Experts: 90% of Online Content Will Be AI-Generated by 2026

Futurism: “Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance. “Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.” “In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.” As it probably goes without saying: 90 percent is a pretty jarring number. Of course, people have already become accustomed — to a degree — to the presence of bots, and AI-generated text-to-image programs have certainly been making big waves. Still, our default isn’t necessarily to assume that almost everything we come into digital contact with might be, well, fake.

“On a daily basis, people trust their own perception to guide them and tell them what is real and what is not,” reads the Europol report. “Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?”
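
On the question above about training AI on AI-generated data: a toy simulation hints at the failure mode. A minimal sketch, assuming only NumPy: each “generation” fits a simple Gaussian model to samples drawn from the previous one, the fitted parameters drift away from the real data, and over many generations the variance tends to shrink.

```python
# Toy illustration of recursive training on synthetic data: fit a Gaussian,
# sample from the fit, refit, and repeat. Estimation error compounds, so the
# model drifts from the real data; over many generations variance decays.
import numpy as np

rng = np.random.default_rng(42)
real_data = rng.normal(loc=0.0, scale=1.0, size=10_000)

mu, sigma = real_data.mean(), real_data.std()
for generation in range(1, 21):
    synthetic = rng.normal(loc=mu, scale=sigma, size=200)  # "train" on model output
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```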





I’m not sure how to take this. Is this a new class of criminal or a mental health issue?

https://www.bespacific.com/who-knowingly-shares-false-political-information-online/

Who knowingly shares false political information online?

Misinformation Review: Who knowingly shares false political information online? “Some people share misinformation accidentally, but others do so knowingly. To fully understand the spread of misinformation online, it is important to analyze those who purposely share it. Using a 2022 U.S. survey, we found that 14 percent of respondents reported knowingly sharing misinformation, and that these respondents were more likely to also report support for political violence, a desire to run for office, and warm feelings toward extremists. These respondents were also more likely to have elevated levels of a psychological need for chaos, dark tetrad traits, and paranoia. Our findings illuminate one vector through which misinformation is spread.”





We may need AI to sort out these answers.

https://www.ben-evans.com/benedictevans/2023/8/27/generative-ai-ad-intellectual-property

Generative AI and intellectual property

We’ve been talking about intellectual property in one way or another for at least the last five hundred years, and each new wave of technology or creativity leads to new kinds of arguments. We invented performance rights for composers and we decided that photography - ‘mechanical reproduction’ - could be protected as art, and in the 20th century we had to decide what to think about everything from recorded music to VHS to sampling. Generative AI poses some of those questions in new ways (or even in old ways), but it also poses some new kinds of puzzles - always the best kind.

At the simplest level, we will very soon have smartphone apps that let you say “play me this song, but in Taylor Swift’s voice”. That’s a new possibility, but we understand the intellectual property ideas pretty well - there’ll be a lot of shouting over who gets paid what, but we know what we think the moral rights are. Record companies are already having conversations with Google about this.

But what happens if I say “make me a song in the style of Taylor Swift” or, even more puzzling, “make me a song in the style of the top pop hits of the last decade”?





A summary.

https://www.cpomagazine.com/data-protection/chatgpt-ip-and-privacy-considerations/

ChatGPT – IP and Privacy Considerations

There has been much excitement about ChatGPT since it launched in November 2022. A bellwether for the advance of generative AI, it can chat, create content, code, translate, brainstorm and more. It can even act as a personal assistant or therapist. Its use cases are almost endless.

The advance of AI raises some important questions. Is it the harbinger of the singularity? Will it replace all our jobs? We don’t aim to answer those questions here… Instead, we focus on the potential IP and data protection issues surrounding use of ChatGPT.



Sunday, August 27, 2023

Imagine AI making recommendations during a pandemic. Suggesting that care of the young take precedence over care for those less likely to survive. Isn’t that ethical?

https://journals.sagepub.com/doi/full/10.1177/09685332231193944

The possibility of AI-induced medical manslaughter: Unexplainable decisions, epistemic vices, and a new dimension of moral luck

The use of artificial intelligence (AI) systems in healthcare provides a compelling case for a re-examination of ‘gross negligence’ as the basis for criminal liability. AI is a smart agency, often using self-learning architectures, with the capacity to make autonomous decisions. Healthcare practitioners (HCPs) will remain responsible for validating AI recommendations but will have to contend with challenges such as automation bias, the unexplainable nature of AI decisions, and an epistemic dilemma when clinicians and systems disagree. AI decisions are the result of long chains of sociotechnical complexity with the capacity for undetectable errors to be baked into systems, which introduces a new dimension of moral luck. The ‘advisory’ nature of AI decisions constructs a legal fiction, which may leave HCPs unjustly exposed to the legal and moral consequences when systems fail. On balance, these novel challenges point towards a legal test of subjective recklessness as the better option: it is practically necessary; falls within the historic range of the offence; and offers clarity, coherence, and a welcome reconnection with ethics.





This is close to the type of attack that immediately precedes invasion. The ability to tell amateurs from pros is critical.

https://www.databreaches.net/hackers-bring-down-polands-train-network-in-massive-cyber-attack/

Hackers bring down Poland’s train network in massive cyber attack

Ticker News reports:

Polish intelligence agencies are currently conducting an investigation into a cyberattack that targeted the country’s railway infrastructure, according to reports from Polish media.
The incident, which occurred overnight, involved hackers gaining unauthorized access to railway frequencies, resulting in disruptions to train services in the northwestern region of Poland. The Polish Press Agency (PAP) revealed that during the attack, the hackers broadcasted Russia’s national anthem and a speech by President Vladimir Putin.

Read more at Ticker News.





The problem with “Helpful by design?”

https://www.preprints.org/manuscript/202308.1271/v1

Exploring Ethical Boundaries: Can ChatGPT Be Prompted to Give Advice on How to Cheat in University Assignments?

Generative artificial intelligence (AI), in particular large language models such as ChatGPT, have reached public consciousness with a wide-ranging discussion of their capabilities and suitability for various professions. The extant literature on the ethics of generative AI revolves around its usage and application, rather than the ethical framework of the responses provided. In the education sector, concerns have been raised with regard to the ability of these language models to aid in student assignment writing, with the potentially concomitant student misconduct if such work is submitted for assessment. Based on a series of ‘conversations’ with multiple replicates, using a range of discussion prompts, this paper examines the capability of ChatGPT to provide advice on how to cheat in assessments. Since its public release in November 2022, numerous authors have developed ‘jailbreaking’ techniques to trick ChatGPT into answering questions in ways other than the default mode. While the default mode activates a safety awareness mechanism that prevents ChatGPT from providing unethical advice, [??? Bob] other modes partially or fully bypass this mechanism and elicit answers that are outside expected ethical boundaries. ChatGPT provided a wide range of suggestions on how to best cheat in university assignments, with some solutions common to most replicates (‘plausible deniability,’ ‘language adjustment of contract-written text’). Some of ChatGPT’s solutions to avoid cheating being detected were cunning, if not slightly devious. The implications of these findings are discussed.