As I have mentioned more than once…
https://researcharchive.lincoln.ac.nz/items/b2e4080f-1047-428b-879f-7d825952d9ac
15 ways LLMs could ruin scholarly communication - and what we can do about it
Despite the dreams of science-fiction fans worldwide, the thing being marketed as "artificial intelligence" is no more than high-powered predictive text. What it gets right is thanks to its input data created by billions of humans, and to an invisible and underpaid workforce of content moderators. What it gets wrong threatens privacy, exacerbates sexism, racism and other inequities, and may even be environmentally damaging. There are situations that are well enough defined that machine models can be useful, but scholarly communication by its nature is full of new and unique information, relying on precisely reported data, which algorithms based on probabilities can't deal with. So as a community we need to come up with ways to prevent machine-generated fake papers from poisoning the well of science - and we need to be healthily sceptical of vendors selling us machine-based solutions to problems that can still only be addressed by human intelligence.
As much as we trust the law?
https://onlinelibrary.wiley.com/doi/full/10.1111/rego.12568
Regulating for trust: Can law establish trust in artificial intelligence?
The current political and regulatory discourse frequently references the term "trustworthy artificial intelligence (AI)." In Europe, attempts to ensure trustworthy AI began with the High-Level Expert Group's Ethics Guidelines for Trustworthy AI and have now merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives (as the US Executive Order on Safe, Secure, and Trustworthy AI and the Bletchley Declaration on AI showcase) based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we propose to consider the broader literature on trust in automation. On this basis, we construct a framework to analyze 16 factors that impact trust in AI and automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation to gauge different regulatory strategies, notably by differentiating between those strategies where regulation is more likely to also influence trust in AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers policymakers a targeted approach for debating how to streamline regulatory efforts for future AI governance.
My field. Looks like we have made a start...
https://repository.tudelft.nl/islandora/object/uuid:5a54364d-6642-46f0-929f-0d3ba72c23f5
Auditing Artificial Intelligence
Recent technological advancements have enabled the development of increasingly impactful and complex Artificial Intelligence (AI) systems. This complexity comes with a trade-off in system opacity. The resulting lack of understanding, combined with reported algorithm scandals, has decreased public trust in AI systems. Meanwhile, the AI risk mitigation field is maturing. One of the proposed mechanisms to incentivize the verifiable development of trustworthy AI systems is the AI audit: the external assessment of AI systems.
The AI audit is an emerging subdomain of the Information Technology (IT) audit, a standardized practice carried out by accountants. Unlike the IT audit, there are currently no defined AI-specific rules and regulations to adhere to. At the same time, some organizations are already seeking external assurance from accountancy firms on their AI systems. AI auditors have indicated that this has led to challenges in their current audit approach, mainly due to a lack of structure. Therefore, this thesis proposes an AI audit workflow comprising a general AI auditing framework combined with a structured scoping approach.
Interviews with AI auditors at one accountancy firm in the Netherlands revealed that the demand for AI audits is increasing and expected to keep growing. Clients mainly seek assurance for stakeholder and reputation management. Furthermore, the challenges the auditors currently experience stem from having to aggregate auditing questions from a range of auditing frameworks, which causes issues in recombining them and in determining question relevance. Subsequently, design criteria for a general auditing framework, as well as feedback on a proposed scoping approach, were obtained.
Fourteen AI auditing frameworks were identified through a literature search. Following their typology, these could be subdivided into three source categories: academic, industry, and auditing/regulatory. Academic frameworks typically focused on specific aspects of trustworthy AI, while industry frameworks emphasized the need for public trust to drive AI progress. Frameworks developed by auditing and regulatory organizations tended to be most extensive.
A new metric to laugh about?
P(doom) is AI’s latest apocalypse metric. Here’s how to calculate your score
… P(doom) officially stands for “probability of doom,” and as its name suggests, it refers to the odds that artificial intelligence will cause a doomsday scenario.
… The scale runs from zero to 100, and the higher you score yourself, the more you’re convinced that AI is not only willing to eliminate humankind, if necessary, but in fact, is going to succeed at carrying out that task.
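For what it's worth, the "calculation" is nothing more than a subjective probability expressed on a 0-100 scale. Here is a minimal sketch in Python; the function names and the bucket labels are my own illustration, not anything defined in the article:

def p_doom_score(probability):
    # Express a subjective probability (0.0 to 1.0) of an AI-caused
    # catastrophe as a score on the 0-100 P(doom) scale.
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * 100

def describe(score):
    # Hypothetical buckets, purely for illustration; the article
    # defines no official cut-offs.
    if score < 10:
        return "largely unconcerned"
    if score < 50:
        return "worried, but expects humanity to muddle through"
    if score < 90:
        return "expects things to go badly"
    return "has already written humanity's obituary"

# Someone who puts the odds at one in five scores 20.
print(p_doom_score(0.2), describe(p_doom_score(0.2)))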