A worthwhile comparison?
https://wiredspace.wits.ac.za/items/c7d937a3-db8e-440e-acb5-bebf38505684
Artificial intelligence and automated decision making under the GDPR and the POPIA
This analysis considers the concepts of AI and machine learning and examines their reliance on the processing of personal data and the challenges this poses from a data-privacy and human-rights perspective, particularly in relation to profiling. It evaluates the effectiveness of the General Data Protection Regulation (GDPR) and the Protection of Personal Information Act 4 of 2013 (POPIA) in regulating Automated Decision Making (ADM) and considers the limitations of the right to an explanation under these provisions. The analysis proposes that the current framework of the GDPR and POPIA does not clearly address the issue of explainability, and that the focus should shift to providing a data subject with a counterfactual, which would give practical effect to this right and better serve data subjects.
Probably false.
https://hrcak.srce.hr/en/326914
Some (Wittgensteinian) Remarks on the Ethics of Artificial Intelligence
I argue in favor of a distinction between human understanding and machine “understanding”. Based on Wittgenstein’s view on machines and his considerations on understanding, I aim to demonstrate that no machine with artificial intelligence can reach functional equality with human beings. In particular, this also holds for ethical praxis, because it consists of an extremely blurred net of language-games, guided by ethical rules. Therefore, a machine can never have the human ability (disposition) to act ethically and cannot be a moral agent.
Understanding the arguments?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5097441
Not The End Of Lawyers, But A Beginning The Place Of Entrepreneurship And Innovation In Legal Ethics
In questioning, if not declaring, the demise of the legal profession of the 20th century, Richard Susskind’s The End of Lawyers? Rethinking the Nature of Legal Services (Oxford University Press, 2008) set the stage for a new subfield of legal ethics devoted to entrepreneurship and innovation. A scholar of law and technology, not an ethicist, Susskind nonetheless forced regulators of professional ethics to consider whether or not to address obligations related to technology-driven evolution in the delivery of legal services. In the decade-plus since the first publication of the book, numerous jurisdictions have adopted ethical rules that require lawyers to maintain competence in the risks and benefits of changes in law practice, many of which Susskind forecast. Similarly, a growing number of scholars and commentators now include the ethics of legal services delivery under the umbrella of legal ethics generally.
This book chapter situates Susskind’s book in the context of the international canon of legal ethics and validates it as a leading work. In doing so, the chapter is partially autobiographical in nature, discussing the parallel evolution of Renee Knake Jefferson’s own work creating a law laboratory devoted to technology, entrepreneurship, and innovation to prepare future lawyers for the world Susskind predicts. Susskind often criticizes lawyers for failing to recognize and provide what their clients actually want. For example, he tells the story about the sale of drills. People don’t buy drills because they want drills, he says; they do so because they want holes. But Susskind is only partially correct: people don’t want holes. They want artwork or photos hung on the nail that fills the hole, and the feeling they have when they see that art or photo hanging on the wall. Similarly, the conclusions he reaches in The End of Lawyers? are only partially correct. Viewing his book as one about legal ethics, not simply the nature of legal services, reveals critical gaps and omissions that this chapter aims to fill.
Wishing makes it so?
Anthropomorphizing AI: Dire consequences of mistaking human-like for human have already emerged
In our rush to understand and relate to AI, we have fallen into a seductive trap: attributing human characteristics to these robust but fundamentally non-human systems. This anthropomorphizing of AI is not just a harmless quirk of human nature — it is becoming an increasingly dangerous tendency that might cloud our judgment in critical ways. From business leaders comparing AI learning to human education to justify training practices, to lawmakers crafting policies based on flawed human-AI analogies, this tendency to humanize AI might inappropriately shape crucial decisions across industries and regulatory frameworks.
Viewing AI through a human lens in business has led companies to overestimate AI capabilities or underestimate the need for human oversight, sometimes with costly consequences. The stakes are particularly high in copyright law, where anthropomorphic thinking has led to problematic comparisons between human learning and AI training.
Does this make you want to go back to school? (Me neither.)
Can AI Condense Two Years of Learning Into Six Weeks?
In a modest classroom in Edo State, Nigeria, an educational revolution unfolded. Over six weeks, students accomplished what would typically take two years. This wasn’t a product of extra hours or an elite teaching corps. It was the result of generative AI — a large language model serving as a virtual tutor in an after-school program. The pilot program, supported by the World Bank and published on its website, delivered remarkable results: students made significant strides in English, digital literacy, and even foundational AI concepts. The numbers are extraordinary, but the story is even more compelling. Here, in a Nigerian classroom, we caught a glimpse of how AI might redefine learning for millions worldwide.