A data security topic.
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
As machine learning systems grow in scale, so do their training data requirements, forcing practitioners to automate and outsource the curation of training data in order to achieve state-of-the-art performance. The absence of trustworthy human supervision over the data collection process exposes organizations to security vulnerabilities: training data can be manipulated to control and degrade the downstream behaviors of learned models. The goal of this work is to systematically categorize and discuss a wide range of dataset vulnerabilities and exploits, approaches for defending against these threats, and an array of open problems in this space. In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy of these attacks.
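To make the threat concrete, here is a minimal sketch of the classic backdoor-style poisoning the survey describes: an attacker who controls a small fraction of the training set stamps a fixed "trigger" pattern onto those samples and relabels them with a target class. All data, names, and parameters below are illustrative toy assumptions, not taken from the paper.

```python
import numpy as np

# Toy sketch of a backdoor ("trigger") poisoning attack on image-like data.
# Everything here is a hypothetical illustration, not the survey's method.

rng = np.random.default_rng(0)

# Clean "dataset": 100 samples of 8x8 grayscale "images", labels 0..9.
X = rng.random((100, 8, 8))
y = rng.integers(0, 10, size=100)

TARGET_LABEL = 7        # label the attacker wants triggered inputs to receive
POISON_FRACTION = 0.05  # fraction of the training data the attacker controls

def add_trigger(img):
    """Stamp a small bright patch (the backdoor trigger) in one corner."""
    poisoned = img.copy()
    poisoned[-2:, -2:] = 1.0  # 2x2 white square in the bottom-right corner
    return poisoned

# Poison a small fraction of samples: add the trigger AND flip the label.
n_poison = int(POISON_FRACTION * len(X))
idx = rng.choice(len(X), size=n_poison, replace=False)
for i in idx:
    X[i] = add_trigger(X[i])
    y[i] = TARGET_LABEL

# A model trained on (X, y) may learn to associate the trigger patch with
# TARGET_LABEL while behaving normally on clean inputs -- the backdoor.
```

The point of the sketch is that the manipulation is tiny (here 5% of samples and a 4-pixel patch), which is exactly why such attacks are hard to spot in automatically curated datasets.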
(Related)
Assessing Information Quality in IoT Forensics: Theoretical Framework and Model Implementation
IoT technologies pose serious challenges to digital forensics. The acquisition of digital evidence is hindered by the number and extreme variety of IoT devices, which often lack physical interfaces, are connected over unprotected networks, and feed data to uncontrolled cloud services. In this paper we address “Information Quality” in IoT forensics, taking into account different levels of complexity.
… After drawing a theoretical framework on data quality and information quality, we focus on forensic-analysis challenges in IoT environments, providing a use case of evidence collection for investigative purposes. Finally, we propose a formal framework for assessing the information quality of IoT devices for forensic analysis.
For my Data Governance students.
Three 2021 trends in data governance for firms to bolster and sustain digital transformation
… Three principles and associated formalized dimensions that financial services firms can focus on:
1. Formalizing data collection from customers & third parties
2. Increasing data awareness & literacy
3. Managing data distribution
For my techie-lawyer friends.
https://www.tandfonline.com/doi/abs/10.1080/09695958.2020.1857765
Unlocking the potential of AI for English law
This paper discusses how digital technologies, including artificial intelligence (AI), reshape the work of lawyers and the organisations they work for. We overview how AI is being used in legal services and identify three distinct impacts: AI substitutes for automatable legal tasks; AI enhances the productivity of lawyers giving advice on the basis of AI-generated outputs; and legal expertise itself augments the deployment of AI when lawyers work as part of a multi-disciplinary team (MDT) encompassing a range of relevant professional expertise. Our survey of English solicitors shows that AI deployment is associated with MDTs, and that MDTs are less prevalent in law firms than in corporations. This latter finding is due to challenges that law firms face as mono-professional partnerships. We find evidence from our interviews that their challenges lie not so much in capital constraints, which have been relaxed via alternative business structures in the UK, but in traditional law firms’ inability to recruit and retain talent outside the legal profession. Inadequate adaptation is occurring as law firms shift their structure from a funnel shape to a rocket shape, with junior lawyers in the partnership tournament working alongside a growing number of non-lawyers whose career paths offer no prospect of partnership.
Before we let them loose...
https://journal.fi/ta/article/view/101300
Ethical issues in the use of artificial intelligence in military command, especially in the use of deadly autonomous weapon systems
The ethics of warfare and military leadership must pay attention to the rapidly increasing use of artificial intelligence and machines. Who is responsible for the decisions made by a machine? Do machines make decisions? May they make them? These issues are of particular interest in the context of Lethal Autonomous Weapon Systems (LAWS). Are they autonomous or merely automated? Do they violate international humanitarian law, which requires that humans must always be responsible for the use of lethal force and for the assessment that civilian casualties are proportionate to the military goals?
… The article’s argument is that the question of responsibility is most naturally addressed by setting aside the most controversial philosophical considerations and simply stating that an individual or a group of people is always responsible for the equipment they create and use.
(Related)
https://dergipark.org.tr/tr/pub/auhfd/issue/58829/848714
AN EXPERIMENT ON THE LEGAL AND CRIMINAL LIABILITY OF ROBOTS
Roman law did not grant rights to slaves (D. 4.5.3.1: “Servile caput nullum ius habet”). In that legal system, slaves were considered things (res) that could form part of the property of free people. However, because they were human beings, they were kept separate from other things that could be owned. When the situation of slaves in Roman law is examined in detail, it can be seen that their status resembles that of the robots we believe will become part of our lives in the near future. The slave can be defined as a kind of sentient commodity that can think and make decisions by itself; the same features will be seen in the robots of our future. In other words, robots can be regarded as the technological relatives of slaves in terms of their legal status. All this leads to the conclusion that, if there is to be a legal regulation of robots, that regulation can readily be drawn from Roman law. In this study, we give detailed information about the concepts of slaves and robots in order to explain why the two can be considered to have the same status. After defining and analyzing the concepts of robot and artificial intelligence in detail, we examine the situation of slaves, compare the two, and discuss the legal responsibility of robots. We then evaluate the criminal liability of robots under the heading of punishing artificial intelligence and share our ideas on this issue.
(Related)
https://www.sciencedirect.com/science/article/abs/pii/S0265964620300503
The Advent of Artificial Intelligence in Space Activities: New Legal Challenges
Artificial Intelligence (AI) – the ability of a computer or a robot to perform tasks commonly associated with intelligent beings – represents both a new challenge and a significant opportunity for the future of space activities. Indeed, increasing connectivity and symbiotic interactions between humans and intelligent machines raise significant questions for the rule of law and contemporary ethics, including the applicable rules relating to liability in case of damage arising from advanced AI. AI also encompasses a series of complex issues that cut across social, economic, public policy, technological, legal, ethical and national security boundaries. The development of AI-based autonomous systems is equally relevant in the context of military operations and on the battlefield, particularly with the use of drones and, more controversially, Lethal Autonomous Weapons Systems. After outlining the legal and ethical challenges posed by this technology, this article focuses on AI systems for space operations that give rise to questions about how these interact with existing legal concepts and technical standards. This article also describes how space law is relevant and applicable to AI use in the context of space missions. The specific attributes of autonomous space systems may also require further consideration as to the traditional application of the authorization of space missions, the international responsibility of States and the liability regime in case of damage. As a precursor to more detailed research in the future, this article seeks to introduce some of the more significant legal issues that AI-driven automated processes might pose for space operations.
Perhaps a bit of an exaggeration, but since it is already in my local library it might be worth a read.
Hitting the Books: What do we want our AI-powered future to look like?
Ethical decision making must be a cornerstone of the coming robot revolution.
Once the shining city on a hill that the rest of the world looked to for leadership and guidance, America’s moral high ground has steadily eroded in recent decades — and rapidly accelerated since Trump’s corrupt, self-dealing tenure in the White House began. Our corporations, and the technologies they develop, are certainly no better. Amazon treats its workers like indentured servants at best, Facebook’s algorithms actively promote genocide overseas and fascism here in the States, and Google doesn’t even try to live up to its own maxim of “don’t be evil” anymore.
In her upcoming book, The Power of Ethics: How to Make Good Choices in a Complicated World, Susan Liautaud, Chair of Council of the London School of Economics and Political Science, lays out an ambitious four-step plan to recalibrate our skewed moral compass, illustrating how effective ethical decision making can counter damage done by those in power and create a better, fairer, and more equitable world for everyone.