A new field of law.
https://doi.org/10.4324/9781003028031
Drone Law and Policy
Drone Law and Policy describes the drone industry and its evolution, examining the benefits and risks of its exponential growth. It outlines the current and proposed regulatory framework in Australia, the United States, the United Kingdom and Europe, taking into consideration the current and evolving technological and insurance landscape.
This book makes recommendations as to additional regulatory and insurance initiatives which the authors believe are necessary to achieve an effective balance between the various competing interests.
(Related)
Risk of discrimination in AI systems
Using artificial intelligence (AI) systems within decision-making processes is neither a new nor an unfamiliar concept. If used correctly, algorithmic decision-making has the potential to be both beneficial and valuable to modern society. Yet the risk of discrimination resulting from the use of these automated systems is substantial and widely acknowledged. As such, this work considers instances in which algorithmic systems have discriminated against individuals based upon protected characteristics, why this is the case, and what can be done to prevent this from continuing to occur. More specifically, the purpose of this chapter is to evaluate the effectiveness of current legal safeguards, such as the General Data Protection Regulation (GDPR), the European Convention on Human Rights (ECHR), and the Equality Act 2010, in dealing with AI-based discrimination. In particular, this chapter features analysis of the first known legal challenge to the use of algorithms in the UK, brought by The Joint Council for the Welfare of Immigrants and Foxglove, and considers the effectiveness of some of the aforementioned legal safeguards in this case.
No one got it right. (And that might be the model going forward.)
https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.12965
Emerging Consensus on ‘Ethical AI’: Human Rights Critique of Stakeholder Guidelines
Voluntary guidelines on ‘ethical practices’ have been the response by stakeholders to address the growing concern over harmful social consequences of artificial intelligence and digital technologies. Issued by dozens of actors from industry, government and professional associations, the guidelines are creating a consensus on core standards and principles for the ethical design, development and deployment of artificial intelligence (AI). Using human rights principles (equality, participation and accountability) and attention to the right to privacy, this paper reviews 15 guidelines preselected to be strongest on human rights and on global health. We find that about half of these ground their guidelines in international human rights law and incorporate the key principles; even these could go further, especially in suggesting ways to operationalize them. Those that adopt the ethics framework are particularly weak in laying out standards for accountability, often focusing on ‘transparency’ and remaining silent on enforceability and participation, which would effectively protect the social good. These guidelines mention human rights as a rhetorical device to obscure the absence of enforceable standards and accountability measures, and give their attention to the single right to privacy. These ‘ethics’ guidelines, disproportionately from corporations and other interest groups, are also weak on addressing inequalities and discrimination. We argue that voluntary guidelines are creating a set of de facto norms and a re-interpretation of the term ‘human rights’ for what would be considered ‘ethical’ practice in the field. This exposes an urgent need for action by governments and civil society to develop more rigorous standards and regulatory measures, grounded in international human rights frameworks, capable of holding Big Tech and other powerful actors to account.
I love a good argument. Particularly when two AIs are arguing.
On the application of ethical guidelines for AI and the challenges from value conflicts
The aim of this article is to articulate and critically discuss different answers to the following question: How should decision-makers deal with conflicts that arise when the values usually entailed in ethical guidelines – such as accuracy, privacy, nondiscrimination and transparency – for the use of Artificial Intelligence (e.g. algorithm-based sentencing) clash with one another?
Do you (or your AI) agree?
https://www.digitaltrends.com/features/trend-analyzing-ai-predicts-next-big-thing-tech/
Here’s what a trend-analyzing A.I. thinks will be the next big thing in tech
Virtual and augmented reality. 3D printing. Natural language processing. Deep learning. The smart home. Driverless vehicles. Biometric technology. Genetically modified organisms. Brain-computer interfaces.
These, in descending order, are the top 10 most-invested-in emerging technologies in the United States, as ranked by number of deals. If you want to get a sense of which technologies will be shaping our future in the years to come, this probably isn’t a bad starting point.
There is real elegance in a simple, foolproof design.
https://dilbert.com/strip/2021-06-27