Sunday, July 30, 2023

Implications, all in one place?

https://www.researchgate.net/profile/Yaser-Jasim-2/publication/372572580_The_Ethical_Implications_of_ChatGPT_AI_Chatbot_A_Review/links/64bedbf9c41fb852dd98c995/The-Ethical-Implications-of-ChatGPT-AI-Chatbot-A-Review.pdf

The Ethical Implications of ChatGPT AI Chatbot: A Review

This paper analyses the ethical implications of the ChatGPT AI chatbot, a popular natural language processing model. After presenting the technology and describing the complexities of its ethical implications, the study gives a background and literature analysis of artificial intelligence (AI) ethics, ethical considerations for chatbots and conversational agents, and existing research on the ethical implications of ChatGPT AI. The section on ethical implications examines possible issues such as privacy concerns, bias and fairness issues, malicious usage, and the influence on human interaction and social skills. The paper then critiques present ethical rules and regulations and recommends modifications to ethical principles for ChatGPT AI. Case studies and examples illustrate the moral quandaries in ChatGPT AI chatbot usage, successful ethical implementations, and lessons learned. The review concludes by outlining the relevance of ethical issues in the construction and deployment of the ChatGPT AI chatbot, the need for a multidisciplinary approach to handle its moral implications, and final thoughts and recommendations for ethical implementation.





Copyright is even more screwed up than I thought.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4517702

How Generative AI Turns Copyright Law on Its Head

While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don't fit in any of those categories. The new model of creativity generative AI brings puts considerable strain on copyright's two most fundamental legal doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Increasingly, creativity will be lodged in asking the right questions, not in creating the answers. Asking questions may sometimes be creative, but the AI does the bulk of the work that copyright traditionally exists to reward, and that work will not be protected. That inverts what copyright law now prizes. And because asking the questions will be the basis for copyrightability, similarity of expression in the answers will no longer be of much use in proving the fact of copying of the questions. That means we may need to throw out our test for infringement, or at least apply it in fundamentally different ways.





Are we going to train AI to lie for us?

https://scholarship.law.wm.edu/incorporating_chatgpt/schedule/fullschedule/9/

Scheherazade, ChatGPT, and Me: Storytelling and AI

Humans developed language to tell stories. Gesturing, demonstration, and vocalization worked for communicating instructions or basic information. But establishing and maintaining community required story, and story required language. Our desire to tell better stories and share them more widely has led to the creation of art forms from simple guitar ballads to epic motion pictures and intricate first-person video games. So it's no wonder that, in the era of generative artificial intelligence, storytellers would be among the first to put AI to work. Storytellers have been using AI for years already to develop stories, which means that AI has itself become an accomplished storyteller. However, the stories that generative AI tells are not usually constrained by a factual record and legal precedent the way that legal stories are. It's no wonder, then, that generative AI is not yet ubiquitous as a storytelling tool for lawyers. But it will be. In this presentation, I will model a process for training ChatGPT-4 on a factual record and relevant law. I will then model "coaching" ChatGPT-4 to generate a sequence of drafts, each one a better, more compelling draft of a trial or appellate Statement of Facts. Time permitting, I will also demonstrate how to use ChatGPT-4's tendency to "hallucinate" to draft assignments, including curated hypothetical facts, in seconds.





Ethics can be good? What a concept!

https://www.sciencedirect.com/science/article/abs/pii/S0267364923000626

On defense of “ethification” of law: How ethics may improve compliance with the EU digital laws

In recent years, academics and professionals have witnessed the rise of the "ethification" of law, specifically in the area of ICT law. Ethification shall be understood as a proliferation of moral principles and moral values in the legal discourse within the areas of research, innovation governance, or directly enforceable rules in the industry. Although ethical considerations may seem distant from mere regulatory compliance, the opposite is true. The article focuses on the positive side of the "ethification" of digital laws through the lens of legal requirements for impact assessments pursuant to the General Data Protection Regulation and conformity assessments in the proposal for the Artificial Intelligence Act. The authors argue that ethical considerations are often absent in the context of using new technologies, including artificial intelligence, yet they may provide additional value for organizations and society as a whole. Additionally, carrying out ethics-based assessments is already in line with existing regulatory requirements in the fields of data protection law and the proposed EU AI regulation. These arguments are reflected in the context of facial recognition technology, where both a data protection impact assessment under the EU General Data Protection Regulation and a conformity assessment under the proposal of the EU Artificial Intelligence Act will be mandatory. Facial recognition technology is analyzed through an ethics-based assessment involving stakeholder analysis, a data flows map, and identification of risks and respective countermeasures to show the additional insights that ethics provides beyond regulatory requirements.





There is something here… Is it AI’s fault if not everyone keeps up?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4518510

How AI Unfairly Tilts the Playing Field: Privacy, Fairness, and Risk Shadows

Private sector applications of artificial intelligence (AI) raise related questions of informational privacy and fairness. Fairness requires that market competition occurs on a level playing field, and uses of AI unfairly tilt the field. Informational privacy concerns arise because AI tilts the playing field by taking information about activities in one area of one’s life and using it in ways that impose novel risks in areas not formerly associated with such risks. The loss of control over that information constitutes a loss of informational privacy. To illustrate both the fairness and privacy issues, imagine, for example, that Sally declares bankruptcy after defaulting on $50,000 of credit card debt. She incurred the debt by paying for lifesaving medical treatment for her eight-year-old daughter. Post-bankruptcy Sally is a good credit risk. Her daughter has recovered, and her sole-proprietor business is seeing increased sales. Given her bankruptcy, however, an AI credit scoring system predicts that she is a poor risk and assigns her a low score. That low credit score casts a shadow that falls on her when her auto insurance company, which uses credit scores in its AI system as a measure of the propensity to take risks, raises her premium. Is it fair that saving her daughter’s life should carry with it the risk—realized in this case—of a higher premium? The pattern is not confined to credit ratings and insurance premiums. AI routinely creates risk shadows.

We address fairness questions in two steps. First, we turn to philosophical theories of fairness as equality of opportunity to spell out the content behind our metaphor of tilting the playing field. Second, we address the question of how, when confronted with a mathematically complex AI system, one can tell whether the system meets requirements of fairness. We answer by formulating three conditions whose violation makes a system presumptively unfair. The conditions provide a lens that reveals relevant features when policy makers and regulators investigate complex systems. Our goal is not to resolve fairness issues but to contribute to the creation of a forum in which legal regulators and affected parties can work to resolve them. The third of our three conditions requires that systems incorporate contextual information about individual consumers, and we conclude by raising the question of whether our suggested approach to fairness significantly reduces informational privacy. We do not answer the question but emphasize that fairness and informational privacy questions can closely intertwine.
