Interesting take.
https://www.tomsguide.com/opinion/how-can-chatgpt-be-the-next-big-thing-if-its-this-broken
How can ChatGPT be the next big thing if it's this broken?
ChatGPT and similar "AI" chatbots are all the rage now within the tech community and industry. As we’ve previously explained, the chatbot can perform a range of tasks, from holding a conversation to writing an entire term paper. Microsoft has started integrating ChatGPT technology into products like Bing, Edge and Teams. Google recently announced its Bard AI chatbot, as did You.com. We’re seeing the equivalent of a virtual gold rush. It’s eerily similar to the dot-com boom of the late '90s.
But will the AI chatbot boom burst like the dot-com bubble? We’re still in the early stages, but we’re already seeing signs that ChatGPT isn’t without its faults. In fact, certain interactions some folks have had with ChatGPT have been downright frightening. While the technology seems relatively benign overall, there have been instances that should raise serious concerns.
In this piece, I want to detail the stumbling blocks ChatGPT and similar technologies have hit in recent weeks. While I may briefly discuss future implications, I’m mostly concerned with showing how, at the moment, ChatGPT isn’t the grand revolution some think it is. And while I’ll try to let the examples below speak for themselves, I’ll also give my impressions of ChatGPT in its current state and explain why I believe people need to view it with more skepticism.
Live with it!
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4354422
Open AI in Education, the Responsible and Ethical Use of ChatGPT Towards Lifelong Learning
The rapid development of technology and the interconnection of the world have brought about significant changes in society, the economy, and the environment. Artificial intelligence has advanced significantly in recent years, sparking the creation of ground-breaking technologies like OpenAI's ChatGPT. Modern technology like the ChatGPT language model has the potential to revolutionize the educational landscape. This article's goals are to present a thorough analysis of the responsible and ethical use of ChatGPT in education, and to encourage further study and debate on this important subject. The study found that the use of ChatGPT in education requires respect for privacy, fairness and non-discrimination, transparency in the use of ChatGPT, and several other factors detailed in the paper. To sustain ethics and accountability in the global education sector, the study recommends that all of these measures be carried out.
Must we?
https://link.springer.com/article/10.1007/s10676-023-09683-0
The irresponsibility of not using AI in the military
The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of focusing mainly on the risks of using AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii) take both ethical considerations and enhanced performance in military operations into account. This paper provides a characterization of the debate on responsible AI in the military, considering both machine and human weaknesses and strengths. We present avenues for improving the MDMP, and thus military operations, through the use of AI for decision support, taking each quadrant of this characterization into account.
Maybe so, maybe not?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4355140
The Deepfake Defense—Exploring the Limits of the Law and Ethical Norms in Protecting Legal Proceedings from Lying Lawyers
Thousands of audiovisual images documented the insurrectionists who stormed the United States Capitol on January 6, 2021. Authorities subsequently collected those images and charged some of those depicted for their criminal conduct. Given the overwhelming audiovisual evidence implicating the insurrectionists, it should be impossible to assert a plausible defense claiming that those unmistakably depicted in the images were not present. Right? Wrong. As the defense in the federal criminal trial of January 6th insurrectionist leader Guy Reffitt illustrated, the emergence of “deepfakes” has changed the landscape of plausible defenses to crimes. Reffitt led the attack on the Capitol. Videos and other visual images showed him at the head of the crowd advancing on the Capitol’s West Terrace. He was arrested and charged with multiple crimes. And although the evidence against Reffitt, including audiovisual images, was clear and overwhelming, his lawyer undermined it, arguing to the jury that the evidence was a “deepfake” – an audiovisual recording created using Artificial Intelligence technology that allows anyone with a smartphone to believably map one person’s movements and words onto the image of another person. Unfortunately, the law does not provide a clear response to Reffitt’s lawyer’s reliance on deepfakes as a defense.
But this much is clear: the “deepfake defense” is a new challenge to our legal system’s adversarial process and truth-seeking function. Because the norms of professional ethics require lawyers to advocate zealously, deepfakes invite lawyers to raise objections and arguments against evidence that exploit juror bias and skepticism about what is real. Thus, lawyers may plant seeds of doubt in jurors’ minds, leading them to question the authenticity of all digital audio and visual images, even those counsel knows to be genuine.
Currently, no rule of procedure, ethics, or legal precedent directly addresses the presentation of the “deepfake defense” in court. The existing standards provide scant guidance because they were developed before the advent of deepfake technology. As a result, they do not resolve the question of how to deter lawyers from exploiting it. Although legal scholarship and the popular news media have addressed certain facets of deepfakes in the last several years, there has been no in-depth commentary on the “deepfake defense.” This article is the first to explore the deepfake defense, locating it within the historical and current framework of lawyers’ efforts to fabricate evidence and the laws and practice norms that exist to curb that conduct. It proposes a reconsideration of the ethical rules governing candor, fairness, and the limits of zealous advocacy, and urges a re-examination of the court’s role in sanctioning such conduct. Thus, this article offers novel proposals to guide the way forward for lawyers and courts as they traverse this new technological landscape.
The answer is in the question.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4359407
‘Words Are Flowing Out Like Endless Rain Into a Paper Cup’: ChatGPT & Law School Assessments
ChatGPT is a sophisticated large language model able to answer high-level questions in a way that is undetectable by conventional plagiarism detectors. Concerns have been raised that it poses a significant risk of academic dishonesty in ‘take-home’ assessments in higher education. To evaluate this risk in the context of legal education, this project had ChatGPT generate answers to twenty-four different exams from an English-language law school based in a common law jurisdiction. It found that the system performed best on exams that were essay-based and asked students to discuss international legal instruments or general legal principles not necessarily specific to any jurisdiction. It performed worst on exams that featured problem-style or “issue spotting” questions asking students to apply an invented factual scenario to local legislation or jurisprudence. While the project suggests that conventional law school assessments are, for the time being, relatively immune to the threat ChatGPT poses, this is unlikely to remain the case as the technology advances. However, rather than attempt to block students from using AI as part of learning and assessment, this paper instead proposes three ways students may be taught to use it in appropriate and ethical ways. While it is clear that ChatGPT and similar AI technologies will change how universities teach and assess (across disciplines), a solution of prevention or denial is no solution at all.