Friday, January 17, 2025

AI likes to gossip…

https://www.bespacific.com/the-ethics-of-advanced-ai-assistants/

The Ethics of Advanced AI Assistants

Google DeepMind – “First, because LLMs display immense modeling power, there is a risk that the model weights encode private information present in the training corpus. In particular, it is possible for LLMs to ‘memorise’ personally identifiable information (PII) such as names, addresses and telephone numbers, and subsequently leak such information through generated text outputs (Carlini et al., 2024).

This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user – across one or more domains – in line with the user’s expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders. Our analysis suggests that advanced AI assistants are likely to have a profound impact on our individual and collective lives. To be beneficial and value-aligned, we argue that assistants must be appropriately responsive to the competing claims and needs of users, developers and society. Features such as increased agency, the capacity to interact in natural language and high degrees of personalisation could make AI assistants especially helpful to users. However, these features also make people vulnerable to inappropriate influence by the technology, so robust safeguards are needed. Moreover, when AI assistants are deployed at scale, knock-on effects that arise from interaction between them and questions about their overall impact on wider institutions and social processes rise to the fore. These dynamics likely require technical and policy interventions in order to foster beneficial cooperation and to achieve broad, inclusive and equitable outcomes. Finally, given that the current landscape of AI evaluation focuses primarily on the technical components of AI systems, it is important to invest in the holistic sociotechnical evaluations of AI assistants, including human–AI interaction, multi-agent and societal level research, to support responsible decision-making and deployment in this domain.”
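
The memorisation risk is easy to picture with a toy check: scan a model’s generated text for strings that look like PII before it leaves the system. The sketch below is illustrative only – the regex patterns and the sample output are my own assumptions, not anything from the paper – and real PII detection is much harder than a few regular expressions.

```python
import re

# Minimal illustrative sketch (not the paper's method): scan generated text for
# strings that look like PII. The patterns and sample text below are assumptions
# for the example, not a real PII taxonomy.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_possible_pii(text: str) -> dict:
    """Return every substring of `text` matching one of the PII patterns."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits

if __name__ == "__main__":
    sample_output = "Contact Jane at jane.doe@example.com or (555) 123-4567."
    print(flag_possible_pii(sample_output))
    # -> {'email': ['jane.doe@example.com'], 'us_phone': ['(555) 123-4567']}
```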





Another opinion…

https://www.commerce.gov/news/blog/2025/01/generative-artificial-intelligence-and-open-data-guidelines-and-best-practices

Generative Artificial Intelligence and Open Data: Guidelines and Best Practices

Throughout 2024, the working group published the AI and Open Government Data Assets Request for Information (RFI) and collaborated with AI and data experts across government, the private sector, think tanks, and academia. These efforts resulted in the publication of the guidance, Generative Artificial Intelligence and Open Data: Guidelines and Best Practices.

This guidance provides actionable guidelines and best practices for publishing open data optimized for generative AI systems. While it is designed for use by the Department of Commerce and its bureaus, this guidance has been made publicly available to benefit open data publishers globally. The first version of the guidance, published on January 16, 2025, is envisioned as a dynamic resource that will be revised and updated with new insights, feedback, and other considerations.





A skill only us ‘old people’ still have?

https://www.bespacific.com/can-you-read-cursive-its-a-superpower-the-national-archives-is-looking-for/

Can you read cursive? It’s a superpower the National Archives is looking for

USA Today: “If you can read cursive, the National Archives would like a word. Or a few million. More than 200 years’ worth of U.S. documents need transcribing (or at least classifying) and the vast majority from the Revolutionary War era are handwritten in cursive – requiring people who know the flowing, looped form of penmanship. “Reading cursive is a superpower,” said Suzanne Isaacs, a community manager with the National Archives Catalog in Washington, D.C. She is part of the team that coordinates the more than 5,000 Citizen Archivists helping the Archive read and transcribe some of the more than 300 million digitized objects in its catalog. And they’re looking for volunteers with an increasingly rare skill. Those records range from Revolutionary War pension records to the field notes of Charles Mason of the Mason-Dixon Line to immigration documents from the 1890s to Japanese evacuation records to the 1950 Census. “We create missions where we ask volunteers to help us transcribe or tag records in our catalog,” Isaacs said. To volunteer, all that’s required is to sign up online and then launch in. “There’s no application,” she said. “You just pick a record that hasn’t been done and read the instructions. It’s easy to do for a half hour a day or a week.” Being able to read the longhand script is a huge help because so many of the documents are written using it. “It’s not just a matter of whether you learned cursive in school, it’s how much you use cursive today,” she said…”



Thursday, January 16, 2025

How will we ever discover the technology beyond AI if we stop thinking?

https://www.bespacific.com/ai-tools-in-society-impacts-on-cognitive-offloading-and-the-future-of-critical-thinking/

AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. https://doi.org/10.3390/soc15010006. Published: 3 January 2025.

“The proliferation of artificial intelligence (AI) tools has transformed numerous aspects of daily life, yet its impact on critical thinking remains underexplored. This study investigates the relationship between AI tool usage and critical thinking skills, focusing on cognitive offloading as a mediating factor. Utilising a mixed-method approach, we conducted surveys and in-depth interviews with 666 participants across diverse age groups and educational backgrounds. Quantitative data were analysed using ANOVA and correlation analysis, while qualitative insights were obtained through thematic analysis of interview transcripts. The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage. These results highlight the potential cognitive costs of AI tool reliance, emphasising the need for educational strategies that promote critical engagement with AI technologies. This study contributes to the growing discourse on AI’s cognitive implications, offering practical recommendations for mitigating its adverse effects on critical thinking. The findings underscore the importance of fostering critical thinking in an AI-driven world, making this research essential reading for educators, policymakers, and technologists.”
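
For readers curious what “correlation analysis … mediated by increased cognitive offloading” looks like in practice, here is a minimal sketch on synthetic data. The variable names and effect sizes are invented for the example; this is not the authors’ code or data, just the shape of a zero-order correlation plus a crude Baron-Kenny style mediation check.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

# Synthetic stand-ins for the survey variables named in the abstract; the
# effect sizes are invented purely so the example has something to show.
rng = np.random.default_rng(0)
n = 666
ai_usage = rng.normal(size=n)
offloading = 0.6 * ai_usage + rng.normal(scale=0.8, size=n)     # usage -> offloading
critical = -0.5 * offloading + rng.normal(scale=0.9, size=n)    # offloading -> critical thinking
df = pd.DataFrame({"ai_usage": ai_usage,
                   "offloading": offloading,
                   "critical_thinking": critical})

# Zero-order correlation between AI tool usage and critical thinking
r, p = pearsonr(df["ai_usage"], df["critical_thinking"])
print(f"r = {r:.2f}, p = {p:.3g}")

# Crude Baron-Kenny style mediation check: the usage coefficient should shrink
# toward zero once cognitive offloading is added as a predictor.
total = smf.ols("critical_thinking ~ ai_usage", data=df).fit()
direct = smf.ols("critical_thinking ~ ai_usage + offloading", data=df).fit()
print(f"total effect:  {total.params['ai_usage']:.2f}")
print(f"direct effect: {direct.params['ai_usage']:.2f}")
```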





Did I miss the tipping point?

https://www.axios.com/2025/01/15/americans-use-ai-products-poll

Nearly all Americans use AI, though most dislike it, poll shows

The vast majority of Americans use products that involve AI, but their views of the technology remain overwhelmingly negative, according to a Gallup-Telescope survey published Wednesday.



Wednesday, January 15, 2025

I have to ask this question again, are we engaged in the first World E-War? If so, is the FBI the right agency to deal with it?

https://techcrunch.com/2025/01/14/doj-confirms-fbi-operation-that-mass-deleted-chinese-malware-from-thousands-of-us-computers/

DOJ confirms FBI operation that mass-deleted Chinese malware from thousands of US computers

The Department of Justice and the FBI said on Tuesday that they had successfully deleted the malware planted by the China-backed hacking group, known as “Twill Typhoon” or “Mustang Panda,” from thousands of infected systems across the United States during a court-authorized operation in August 2024.

French authorities led the operation with assistance from Paris-based cybersecurity company Sekoia. In a press release last year, French prosecutors said the malware — known as “PlugX” — had infected several million computers globally, including 3,000 devices located in France. 

U.S. authorities said that the operation was used to delete the malware from more than 4,200 infected computers in the United States.





A point! Definitely a point.

https://www.ft.com/content/917c9535-1cdb-4f6a-9a15-1a0c83663bfd

The coming battle between social media and the state

The notion that all we need to make the world a better place is “better regulation” is deeply embedded in our culture. And one thing for which the cry for regulation is made is social media platforms. If only they were “better regulated”, the popular sentiment goes, then various political and social problems would all be solved.

But there are two problems with regulating social media platforms. The first comes from the very technology that gave rise to this fairly recent but now almost ubiquitous phenomenon. The second is that to impose effective regulation against unwilling platforms will require determined, unflinching governmental action and political will — the possibility of which the platforms are now doing what they can to avoid.





Is this likely to become more common in today’s environment?

https://news.bloomberglaw.com/ip-law/meta-lawyer-lemley-quits-ai-case-citing-zuckerberg-descent

Meta Lawyer Lemley Quits AI Case Citing Zuckerberg 'Descent'

California attorney Mark Lemley dropped Meta Platforms Inc. as a client in a high-profile copyright case because of CEO Mark Zuckerberg’s “descent into toxic masculinity and Neo-Nazi madness,” the Stanford University professor said on LinkedIn.

Lemley said in a Monday post he still believes Meta to be “on the right side in the generative AI copyright dispute,” but that he “cannot in good conscience serve as their lawyer any longer.” Zuckerberg has generated controversy in recent days by ending diversity initiatives at the social media giant and ending fact-checking on Facebook posts while expounding the benefits of “masculine energy.”



Tuesday, January 14, 2025

AI should be able to give similar answers to questions about any privacy law, right?

https://pogowasright.org/new-jersey-division-of-consumer-affairs-publishes-privacy-law-faqs/

New Jersey Division of Consumer Affairs Publishes Privacy Law FAQs

Hunton Andrews Kurth points us to a resource on New Jersey data privacy law:

On January 6, 2025, the New Jersey Division of Consumer Affairs Cyber Fraud Unit published a set of frequently asked questions and answers (“FAQs”) on the New Jersey Data Privacy Law (“NJDPL”). The FAQs are intended for the convenience of businesses that may be subject to the law and cover topics such as “What is ‘personal data’?” and “What rights does the NJDPL protect?”. The FAQs reiterate that small businesses and non-profits are subject to the NJDPL if they meet the law’s applicability thresholds. The FAQs also state that the Division of Consumer Affairs will issue regulations in 2025. The NJDPL becomes effective January 15, 2025.





Is this a strength or a weakness?

https://techxplore.com/news/2025-01-key-ai-power-inbuilt-special.html

Researchers find the key to AI's learning power—an inbuilt, special kind of Occam's razor

A study from Oxford University has uncovered why the deep neural networks (DNNs) that power modern artificial intelligence are so effective at learning from data.

The findings demonstrate that DNNs have an inbuilt "Occam's razor," meaning that when presented with multiple solutions that fit training data, they tend to favor those that are simpler. What is special about this version of Occam's razor is that the bias exactly cancels the exponential growth of the number of possible solutions with complexity.

The Paper: https://www.nature.com/articles/s41467-024-54813-x
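
A toy way to see the claimed simplicity bias for yourself (a rough caricature, not the paper’s experiments): draw many small random-weight networks, record which Boolean function each one computes on a fixed input grid, and compare how often a function shows up with a crude compressibility proxy for its complexity.

```python
import itertools
import zlib
from collections import Counter
import numpy as np

# Toy caricature of the simplicity-bias idea (not the paper's setup): sample
# small random-weight ReLU nets on all 2^5 Boolean inputs, record the Boolean
# function each net computes, and use zlib-compressed length of the truth
# table as a crude stand-in for descriptive complexity.
rng = np.random.default_rng(1)
X = np.array(list(itertools.product([0.0, 1.0], repeat=5)))   # 32 x 5 input grid

def random_net_function(X):
    """Truth table (as a 32-char bit string) of one random 5-16-1 ReLU net."""
    W1 = rng.normal(size=(5, 16)); b1 = rng.normal(size=16)
    W2 = rng.normal(size=(16, 1)); b2 = rng.normal(size=1)
    hidden = np.maximum(X @ W1 + b1, 0.0)
    out = (hidden @ W2 + b2).ravel() > 0
    return "".join("1" if bit else "0" for bit in out)

counts = Counter(random_net_function(X) for _ in range(50_000))

def complexity(bits: str) -> int:
    """Crude complexity proxy: compressed length of the truth table."""
    return len(zlib.compress(bits.encode()))

for fn, freq in counts.most_common(5):
    print(f"sampled {freq:6d} times, complexity ~ {complexity(fn):3d}: {fn}")
# In runs like this, the most frequently sampled functions are typically the
# simplest (e.g. constant) truth tables, while complex ones each appear rarely.
```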



Monday, January 13, 2025

What do you expect from ‘made up’ data?

https://theconversation.com/tech-companies-are-turning-to-synthetic-data-to-train-ai-models-but-theres-a-hidden-cost-246248

Tech companies are turning to ‘synthetic data’ to train AI models – but there’s a hidden cost

A primary concern is that AI models can “collapse” when they rely too much on synthetic data. This means they start generating so many “hallucinations” – responses that contain false information – and decline so much in quality and performance that they are unusable.

For example, AI models already struggle with spelling some words correctly. If this mistake-riddled data is used to train other models, then they too are bound to replicate the errors.

Synthetic data also carries a risk of being overly simplistic. It may be devoid of the nuanced details and diversity found in real datasets, which could result in the output of AI models trained on it also being overly simplistic and less useful.
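
A numerical caricature of the “collapse” dynamic described above (not how any production system is actually trained): fit a simple model to data, generate synthetic samples from it, retrain on those samples, and repeat. If rare cases are under-represented at each step, the fitted spread shrinks generation after generation.

```python
import numpy as np

# Toy illustration of model collapse: each "generation" is fitted only to
# synthetic samples from the previous generation, and (as a stand-in for a
# model under-representing rare cases) the rarest ~10% of samples never make
# it into the next training set.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                          # generation 0, fitted to "real" data

for generation in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=5_000)                  # model-generated data
    typical = synthetic[np.abs(synthetic - mu) < 1.645 * sigma]    # tails get dropped
    mu, sigma = typical.mean(), typical.std()                      # "retrain" on it
    print(f"generation {generation:2d}: std = {sigma:.3f}")
# The fitted spread falls steadily (roughly 20% per generation in this setup),
# a simple analogue of the loss of nuance and diversity the article warns about.
```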





Should we train AI to make more “human like” mistakes?

https://spectrum.ieee.org/ai-mistakes-schneier

AI Mistakes Are Very Different Than Human Mistakes

Humans make mistakes all the time. All of us do, every day, in tasks both new and routine. Some of our mistakes are minor and some are catastrophic. Mistakes can break trust with our friends, lose the confidence of our bosses, and sometimes be the difference between life and death.

Over the millennia, we have created security systems to deal with the sorts of mistakes humans commonly make. These days, casinos rotate their dealers regularly, because they make mistakes if they do the same task for too long. Hospital personnel write on limbs before surgery so that doctors operate on the correct body part, and they count surgical instruments to make sure none were left inside the body. From copyediting to double-entry bookkeeping to appellate courts, we humans have gotten really good at correcting human mistakes.

Humanity is now rapidly integrating a wholly different kind of mistake-maker into society: AI. Technologies like large language models (LLMs) can perform many cognitive tasks traditionally fulfilled by humans, but they make plenty of mistakes. It seems ridiculous when chatbots tell you to eat rocks or add glue to pizza. But it’s not the frequency or severity of AI systems’ mistakes that differentiates them from human mistakes. It’s their weirdness. AI systems do not make mistakes in the same ways that humans do.

Much of the friction—and risk—associated with our use of AI arises from that difference. We need to invent new security systems that adapt to these differences and prevent harm from AI mistakes.



Sunday, January 12, 2025

Interesting. Perhaps AI is coming closer to its hype…

https://www.oneusefulthing.org/p/prophecies-of-the-flood

Prophecies of the Flood

Recently, something shifted in the AI industry. Researchers began speaking urgently about the arrival of supersmart AI systems, a flood of intelligence. Not in some distant future, but imminently. They often refer to AGI - Artificial General Intelligence - defined, albeit imprecisely, as machines that can outperform expert humans across most intellectual tasks. This availability of intelligence on demand will, they argue, change society deeply and will change it soon.

There are plenty of reasons not to believe insiders, as they have clear incentives to make bold predictions: they're raising capital, boosting stock valuations, and perhaps convincing themselves of their own historical importance. They're technologists, not prophets, and the track record of technological predictions is littered with confident declarations that turned out to be decades premature. Even setting aside these human biases, the underlying technology itself gives us reason for doubt. Today's Large Language Models, despite their impressive capabilities, remain fundamentally inconsistent tools - brilliant at some tasks while stumbling over seemingly simpler ones. This “jagged frontier” is a core characteristic of current AI systems, one that won't be easily smoothed away.

Plus, even assuming researchers are right about reaching AGI in the next year or two, they are likely overestimating the speed at which humans can adopt and adjust to a technology. Changes to organizations take a long time. Changes to systems of work, life, and education are slower still. And technologies need to find specific uses that matter in the world, which is itself a slow process. We could have AGI right now and most people wouldn’t notice (indeed, some observers have suggested that has already happened, arguing that the latest AI models like Claude 3.5 are effectively AGI).





New technology, new crimes?

https://www.cambridge.org/core/journals/cambridge-forum-on-ai-law-and-governance/article/generative-ai-and-criminal-law/CFBB64250CAC6A338A5504F0F41C54AB

Generative AI and criminal law

Several criminal offenses can originate from or culminate with the creation of content. Sexual abuse can be committed by producing intimate materials without the subject’s consent, while incitement to violence or self-harm can begin with a conversation. When the task of generating content is entrusted to artificial intelligence (AI), it becomes necessary to explore the risks of this technology. AI changes criminal affordances because it creates new kinds of harmful content, it amplifies the range of recipients, and it can exploit cognitive vulnerabilities to manipulate user behavior. Given this evolving landscape, the question is whether policies aimed at fighting Generative AI-related harms should include criminal law. The bulk of criminal law scholarship to date would not criminalize AI harms on the theory that AI lacks moral agency. Even so, the field of AI might need criminal law, precisely because it entails a moral responsibility. When a serious harm occurs, responsibility needs to be distributed considering the guilt of the agents involved, and, if it is lacking, it needs to fall back because of their innocence. Thus, legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to Generative AI.





Some good bad examples?

https://commons.allard.ubc.ca/fac_pubs/2793/

Artificial Intelligence & Criminal Justice: Cases and Commentary

When I was given the chance to develop a seminar this year at UBC’s Peter A. Allard School of Law, I jumped at the opportunity to develop something new and engaging. After brainstorming ideas with students, it quickly became evident that there was substantial interest and enthusiasm for a seminar on the growing integration of artificial intelligence and the criminal justice system.

Embarking on this journey has been a steep learning curve for me as my students and I worked together to shape the course along with input from generative AI tools like ChatGPT, Gemini and Perplexity, along with open-source materials from the Canadian Legal Information Institute and the Creative Commons search portal.

Delving into the case law in Canada and the U.S., reading the critical commentary, listening to podcasts and webinars, and playing around with the latest AI tools has been a lot of fun, but also made me realize how crucial it is at this point in time to have a focussed critical exploration of the benefits and risks of AI in the criminal justice context.

I hope that this open access casebook will be a valuable resource for students, instructors, legal practitioners and the public, offering insights into how AI is already influencing various aspects of the criminal justice lifecycle – including criminality and victimization, access to justice, policing, lawyering, adjudication, and corrections. If you’re interested in a quick overview of topics covered in this casebook, you can download the companion: Artificial Intelligence & Criminal Justice: A Primer (2024).





Attempts to be ethical.

https://www.researchgate.net/profile/Robert-Smith-169/publication/387723862_The_Top_10_AI_Ethics_Frameworks_Shaping_the_Future_of_Artificial_Intelligence/links/67795c65894c55208542eda3/The-Top-10-AI-Ethics-Frameworks-Shaping-the-Future-of-Artificial-Intelligence.pdf

The Top 10 AI Ethics Frameworks: Shaping the Future of Artificial Intelligence

The rapid advancement of artificial intelligence (AI) has created unprecedented opportunities and challenges, particularly in addressing ethical concerns surrounding its deployment. At the center of these discussions is the dual focus on enforcing ethical principles through robust regulation and embedding ethics as an intrinsic aspect of AI development. This article critically examines the top 10 AI ethics frameworks, each offering unique principles and guidelines to ensure AI's responsible and equitable impact on society. The frameworks explored range from regulatory models and philosophical paradigms to practical governance structures, reflecting the global effort to align AI innovation with the values of fairness, accountability, transparency, and societal benefit. By analysing their contributions, implications, and limitations, this article provides a comprehensive overview of humanity’s collective endeavour to navigate the ethical complexities of AI and foster technologies that prioritize inclusivity, sustainability, and well-being.





Another opinion…

https://azure.microsoft.com/en-us/blog/explore-the-business-case-for-responsible-ai-in-new-idc-whitepaper/

Explore the business case for responsible AI in new IDC whitepaper

I am pleased to introduce Microsoft’s commissioned whitepaper with IDC: The Business Case for Responsible AI. This whitepaper, based on IDC’s Worldwide Responsible AI Survey sponsored by Microsoft, offers guidance to business and technology leaders on how to systematically build trustworthy AI. In today’s rapidly evolving technological landscape, AI has emerged as a transformative force, reshaping industries and redefining the way businesses operate. Generative AI usage jumped from 55% in 2023 to 75% in 2024; the potential for AI to drive innovation and enhance operational efficiency is undeniable. However, with great power comes great responsibility. The deployment of AI technologies also brings with it significant risks and challenges that must be addressed to ensure responsible use.



Saturday, January 11, 2025

Could significantly skew your understanding of your customer base.

https://www.zdnet.com/article/ai-agents-may-soon-surpass-people-as-primary-application-users/

AI agents may soon surpass people as primary application users

Tomorrow's application users may look quite different than what we know today -- and we're not just talking about more GenZers. Many users may actually be autonomous AI agents.   

That's the word from a new set of predictions for the decade ahead issued by Accenture, which highlights how our future is being shaped by AI-powered autonomy. By 2030, agents -- not people -- will be the "primary users of most enterprises' internal digital systems," the study's co-authors state. By 2032, "interacting with agents surpasses apps in average consumer time spent on smart devices."