AI likes to gossip…
https://www.bespacific.com/the-ethics-of-advanced-ai-assistants/
The Ethics of Advanced AI Assistants
Google DeepMind – “First, because LLMs display immense modeling power, there is a risk that the model weights encode private information present in the training corpus. In particular, it is possible for LLMs to ‘memorise’ personally identifiable information (PII) such as names, addresses and telephone numbers, and subsequently leak such information through generated text outputs (Carlini et al., 2024).

This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user – across one or more domains – in line with the user’s expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders. Our analysis suggests that advanced AI assistants are likely to have a profound impact on our individual and collective lives. To be beneficial and value-aligned, we argue that assistants must be appropriately responsive to the competing claims and needs of users, developers and society. Features such as increased agency, the capacity to interact in natural language and high degrees of personalisation could make AI assistants especially helpful to users. However, these features also make people vulnerable to inappropriate influence by the technology, so robust safeguards are needed. Moreover, when AI assistants are deployed at scale, knock-on effects that arise from interaction between them and questions about their overall impact on wider institutions and social processes rise to the fore. These dynamics likely require technical and policy interventions in order to foster beneficial cooperation and to achieve broad, inclusive and equitable outcomes. Finally, given that the current landscape of AI evaluation focuses primarily on the technical components of AI systems, it is important to invest in the holistic sociotechnical evaluations of AI assistants, including human–AI interaction, multi-agent and societal level research, to support responsible decision-making and deployment in this domain.”
Another opinion…
Generative Artificial Intelligence and Open Data: Guidelines and Best Practices
… Throughout 2024, the working group published the AI and Open Government Data Assets Request for Information (RFI) and collaborated with AI and data experts across government, the private sector, think tanks, and academia. These efforts resulted in the publication of the guidance, Generative Artificial Intelligence and Open Data: Guidelines and Best Practices.
This guidance provides actionable guidelines and best practices for publishing open data optimized for generative AI systems. While it is designed for use by the Department of Commerce and its bureaus, this guidance has been made publicly available to benefit open data publishers globally. The first version of the guidance, published on January 16, 2025, is envisioned as a dynamic resource that will be revised and updated with new insights, feedback, and other considerations.
A skill only we ‘old people’ still have?
Can you read cursive? It’s a superpower the National Archives is looking for
USA Today: “If you can read cursive, the National Archives would like a word. Or a few million. More than 200 years’ worth of U.S. documents need transcribing (or at least classifying) and the vast majority from the Revolutionary War era are handwritten in cursive – requiring people who know the flowing, looped form of penmanship. “Reading cursive is a superpower,” said Suzanne Isaacs, a community manager with the National Archives Catalog in Washington, D.C. She is part of the team that coordinates the more than 5,000 Citizen Archivists helping the Archives read and transcribe some of the more than 300 million digitized objects in its catalog. And they’re looking for volunteers with an increasingly rare skill. Those records range from Revolutionary War pension records to the field notes of Charles Mason of the Mason-Dixon Line to immigration documents from the 1890s to Japanese evacuation records to the 1950 Census. “We create missions where we ask volunteers to help us transcribe or tag records in our catalog,” Isaacs said. To volunteer, all that’s required is to sign up online and then launch in. “There’s no application,” she said. “You just pick a record that hasn’t been done and read the instructions. It’s easy to do for a half hour a day or a week.” Being able to read the longhand script is a huge help because so many of the documents are written using it. “It’s not just a matter of whether you learned cursive in school, it’s how much you use cursive today,” she said…”