Data as the new oil?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4116921
A Perspective on Fairness in Artificial Intelligence
Data is the weapon of the future. Whoever controls data, controls the world… If we don’t put up a fight, our data will belong to the wrong people. – The Billion Dollar Code
Machine Learning Algorithms: An Overview
Best to keep your AI happy…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4115647
Both/And - Why Robots Should not Be Slaves
One solution to the exclusive person/thing dichotomy that persists in both moral philosophy and law is slavery. Already in Roman times, slaves were regarded as something more than a mere thing but not quite a full person. They occupied a liminal position situated between the one and the other, being both thing and person. And there have been, in both the legal and philosophical literature, a surprising number of serious proposals arguing for instituting what can only be called Slavery 2.0. This chapter provides a thorough critique of these “robots should be slaves” proposals, demonstrating how this supposed solution to the person/thing dichotomy actually produces more, and significantly worse, problems than it can possibly begin to resolve.
Can machines be ethical?
https://www.researchgate.net/profile/Fatih-Esen/publication/360655432_The_Trust_in_the_Usage_of_Artificial_Intelligence_in_Social_Media_and_Traditional_Mass_Media/links/6283cb7eb2548471fee261d2/The-Trust-in-the-Usage-of-Artificial-Intelligence-in-Social-Media-and-Traditional-Mass-Media.pdf#page=76
Dimensions and Limitations of AI Ethics
Ethics of AI is a new field in the philosophy of technology addressing ethical issues raised by various emerging technologies grouped under the umbrella term “artificial intelligence” (AI). The notion of “artificial intelligence”, broadly understood, covers any kind of artificial (semi)autonomous system that shows forms of intelligent behaviour in achieving a goal. Initially, intelligent behaviour in machines was expected to simulate human cognitive faculties, such as symbolic manipulation, logical reasoning, abstract thinking, learning, decision-making, and more (McCarthy et al. 1955: 2), but the current understanding of AI incorporates a wider range of automatic artificial agents, which excel at particular narrowly defined tasks.
… Traditionally, moral behaviour required rational determination of the will (Kant 2015), so only human beings were expected to bear moral responsibility and hold rights. From this perspective, technologies have been understood as passive and neutral instruments, whose use by humans could be ethical or unethical.
You have got to be kidding. Who defines what is moral?
https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190881931.001.0001/oxfordhb-9780190881931-e-53
Moral Bioenhancement and Future Generations: Selecting Martyrdom?
Moral bioenhancement is a biological modification or intervention which makes moral behavior more likely. There are a number of ways this could potentially be achieved, including pharmaceuticals, non-invasive brain stimulation, or genetic engineering. Moral bioenhancement can be distinguished from other kinds of enhancement because it primarily benefits others, rather than just the individual who has been enhanced. With the challenges that will face future generations, such as climate change and the rise of artificial intelligence, it is even more important to address the possibilities of moral bioenhancement. In this chapter, the authors examine rationales for moral bioenhancement, the possibilities of these technologies, and some common critiques and concerns. The authors defend the view that moral bioenhancement may be a useful tool—one tool among many—to help future generations respond to these aforementioned challenges. The authors suggest that an application of the non-identity problem to genetic selection may help resolve some of the concerns surrounding moral bioenhancement.