Saturday, April 15, 2023

If you don’t know what could go wrong, how can you avoid it?

https://www.businessinsider.com/ai-safety-expert-research-speculates-dangers-doomsday-scenarios-weaponization-deception-2023-4

An AI safety expert outlined a range of speculative doomsday scenarios, from weaponization to power-seeking behavior

A recent paper authored by Dan Hendrycks, an AI safety expert and director of the Center for AI Safety, highlights a number of speculative risks posed by unchecked development of increasingly intelligent AI.

The paper advocates for the incorporation of safety and security features into the way AI systems operate, considering they are still in early stages of development.

Here are eight risks the study laid out:



(Related) Perhaps the machine is not entirely to blame?

https://www.psychologytoday.com/us/blog/cultural-psychiatry/202304/the-great-danger-with-advances-in-artificial-intelligence

The Great Danger With Advances in Artificial Intelligence

… because we so readily idealize the technological (in effect, make it our god), we can get things turned around completely. Caught in techno-utopian bliss, we can make machine learning what we celebrate. And that is just a start. In an odd way, machine learning becomes what we emulate. As attention spans grow shorter and shorter and we give up more and more of our attention to our devices, cognitive changes are taking place in response. Arguably, today, it is less that our machines are coming to think more like us than that we are coming to think more and more like our machines.

We let this happen at our peril. Our ultimate task as toolmakers is to be sure that we use our ever-more amazing tools intelligently and wisely. That starts with being able to distinguish ourselves and our tools clearly. Machine learning—and the ever more complex and often amazing forms it will surely take in times ahead—will provide a particularly defining test of this essential ability, one on which our survival may depend.





Religious thinking? An atheist would not have this problem. (Pray that AIs don’t get religion.)

https://www.schneier.com/blog/archives/2023/04/hacking-suicide.html

Hacking Suicide

You want to commit suicide, but it’s a mortal sin: your soul goes straight to hell, forever. So what you do is murder someone. That will get you executed, but if you confess your sins to a priest beforehand you avoid hell. Problem solved.

This was actually a problem in the 17th and 18th centuries in Northern Europe, particularly Denmark. And it remained a problem until capital punishment was abolished for murder.

It’s a clever hack. I didn’t learn about it in time to put it in my book, A Hacker’s Mind, but I have several other good hacks of religious rules.





What is the tipping point for a federal law? 49?

https://www.pogowasright.org/iowa-and-then-there-were-six-what-you-need-to-do-to-comply-with-the-new-iowa-privacy-law/

Iowa: And then there were six – what you need to do to comply with the new Iowa Privacy Law

On 29 March 2023, Iowa became the sixth state to pass a comprehensive data privacy law (in line behind Connecticut, Utah, Virginia, Colorado, and California). The Iowa Consumer Data Protection Act (‘ICDPA’) will go into effect on 1 January 2025. While there are some familiar elements to other state laws that came before it (the law is most similar to that enacted recently in Utah) – there is still a lot that you need to do!
What are the key things for businesses to focus on if they are already CCPA compliant or compliant with another state privacy program? What about businesses that are not yet compliant with any state-specific privacy regulations?
Odia Kagan and Melanie Notari, from Fox Rothschild LLP, provide an overview of some of the ICDPA’s provisions and take a look at what needs to be considered in order to comply with the law.

Read their article on OneTrust Data Guidance.



(Related)

https://www.pogowasright.org/indiana-set-to-become-the-seventh-state-with-a-comprehensive-privacy-law/

Indiana Set to Become the Seventh State with a Comprehensive Privacy Law

Kirk J. Nahra and Ali A. Jessani of WilmerHale write:

On Tuesday, April 11, the Indiana House passed Senate Bill No. 5, a comprehensive state privacy law similar to the ones that are already in effect in California, Colorado, Virginia, Utah and Connecticut. This bill previously passed (49 – 0) in the Indiana Senate on February 9. Due to minor House amendments, the House version of the bill received Senate concurrence on April 13, and now moves to the Indiana Governor’s desk for signature. If Senate Bill No. 5 is signed into law, Indiana would join Iowa and become the second state this year to pass a comprehensive privacy law.
Unlike the Iowa bill set to go into effect in 2025, the Indiana bill would not go into effect until July 1, 2026, leaving plenty of time for amendments to current provisions. As drafted, the bill does not pose any substantive requirements for companies that do not already exist under the other six active laws. However, companies should track amendments to these proposals as there is still plenty of time for them to change before they go into effect. Further, companies should prepare to review and revise their privacy compliance program and assess whether they wish to undertake a nationwide approach and provide certain privacy rights to all US consumers.

Read more at WilmerHale.





Narrow focus privacy?

https://www.insideprivacy.com/uncategorized/washingtons-my-health-my-data-act-passes-state-senate/

Washington’s My Health My Data Act Passes State Senate

Washington’s My Health My Data Act (“HB 1155” or the “Act”), which would expand privacy protections for the health data of Washington consumers, recently passed the state Senate after advancing through the state House of Representatives. Provided that the House approves the Senate’s amendments, the Act could head to the governor’s desk for signature in the coming days and become law. The Act was introduced in response to the United States Supreme Court’s Dobbs decision overturning Roe v. Wade. If enacted, the Act could dramatically affect how companies treat the health data of Washington residents.

This blog post summarizes a few key takeaways in the statute.





Can the UN do what the US won’t?

https://www.theregister.com/2023/04/14/un_cybercrime_treaty/

Russia-pushed UN Cybercrime Treaty may rewrite global law. It's ... not great

"We are here for the fifth session on the negotiations of this new treaty on cybercrime, which will have the potential to drastically redraft criminal law all around the world," said Thomas Lohninger, executive director of Austria-based tech policy group epicenter.works, in a media briefing on Thursday about the treaty negotiations.

"It represents a tectonic shift because of its global nature when it comes to the cross border access to our personal information."

The UN Cybercrime Treaty, to the extent it gets adopted, is expected to define global norms for lawful surveillance and legal processes available to investigate and prosecute cybercriminals. And what has emerged so far contemplates [PDF] more than 30 new cybercrime offenses, with few concessions to free speech or human rights.



Friday, April 14, 2023

Are Italy’s rules everything we need?

https://thenextweb.com/news/italys-new-rules-chatgpt-could-become-template-for-rest-of-eu

Italy’s new rules for ChatGPT could become a template for the rest of the EU

Last month, Italy became the first Western country to temporarily ban ChatGPT within its borders.

Prompted by a data breach that occurred on March 20, the Italian data protection agency, known as Garante, accused OpenAI of “unlawful” collection of personal data — against the EU’s General Data Protection Regulation (GDPR) — and the absence of an age verification system for minors.

Correspondingly, it ordered the US-based company to cease offering access to ChatGPT in the country.

Now, Garante has announced nine measures OpenAI must comply with for the ban to be lifted. These can be summarised in five main demands:

Transparency

Exercising data rights

Legal basis

Minor protection

Awareness campaign



(Related)

https://www.ft.com/content/addb5a77-9ad0-4fea-8ffb-8e2ae250a95a

European parliament prepares tough measures over use of artificial intelligence

The European parliament is preparing tough new measures over the use of artificial intelligence, including forcing chatbot makers to reveal if they use copyrighted material, as the EU edges towards enacting the world’s most restrictive regime on the development of AI.

MEPs in Brussels are close to agreeing a set of proposals to form part of Europe’s Artificial Intelligence Act, a sweeping set of regulations on the use of AI, according to people familiar with the process.

Among the measures likely to be proposed by parliamentarians is for developers of products such as OpenAI’s ChatGPT to declare if copyrighted material is being used to train their AI models, a measure designed to allow content creators to demand payment. MEPs also want responsibility for misuse of AI programmes to lie with developers such as OpenAI, rather than smaller businesses using it.

One contentious proposal from MEPs is a ban on the use of facial recognition in public spaces under any circumstances. EU member states, under pressure from their local police forces, are expected to push back against a total ban on biometrics, said people with direct knowledge of the negotiations.





Or perhaps I want to show that my document was created before your document. Clearly, you copied my idea!

https://www.makeuseof.com/apps-change-created-modified-date-windows/

8 Apps for Changing the Created/Modified Date on a File on Windows

There are times when you might want to change the created/modified date for your files. For example, you could do this so that you can group your files by a certain common date. In some instances, you could change the created/modified date for privacy purposes—especially if you share your PC with others.
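Under the hood, these apps are rewriting file timestamp attributes. A minimal sketch of the mechanism using Python's standard library (the file name is illustrative; note that the Windows *created* date is a separate attribute needing platform-specific APIs such as pywin32, which is what dedicated apps wrap in a GUI):

```python
import datetime
import os

def backdate_file(path: str, when: datetime.datetime) -> None:
    """Set a file's access and modified timestamps to `when`."""
    ts = when.timestamp()
    os.utime(path, (ts, ts))  # (access time, modified time)

# Demo: create a file, then backdate it to New Year's Day 2020.
with open("example.txt", "w") as f:
    f.write("hello")
backdate_file("example.txt", datetime.datetime(2020, 1, 1, 12, 0))
print(datetime.datetime.fromtimestamp(os.path.getmtime("example.txt")).year)  # 2020
```

Forensics tools can still detect this kind of edit on NTFS, since the `$MFT` keeps a second, harder-to-reach set of timestamps.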





Cheap AI lawyers?

https://www.wired.com/story/generative-ai-courts-law-justice/

Robot Lawyers Are About to Flood the Courts

The hype cycle for chatbots—software that can generate convincing strings of words from a simple prompt—is in full swing. Few industries are more panicked than lawyers, who have been investing in tools to generate and process legal documents for years. After all, you might joke, what are lawyers but primitive human chatbots, generating convincing strings of words from simple prompts?

For America’s state and local courts, this joke is about to get a lot less funny, fast. Debt collection agencies are already flooding courts and ambushing ordinary people with thousands of low-quality, small-dollar cases. Courts are woefully unprepared for a future where anyone with a chatbot can become a high-volume filer, or where ordinary people might rely on chatbots for desperately needed legal advice.



(Related)

https://www.bespacific.com/how-chatgpt-and-generative-ai-systems-will-revolutionize-legal-services-and-the-legal-profession/

How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession

Macey-Dare, Rupert, How ChatGPT and Generative AI Systems will Revolutionize Legal Services and the Legal Profession (February 22, 2023). Available at SSRN: https://ssrn.com/abstract=4366749 or http://dx.doi.org/10.2139/ssrn.4366749 – “In this paper, ChatGPT, is asked to provide c.150+ paragraphs of detailed prediction and insight into the following overlapping questions, concerning the potential impact of ChatGPT and successor generative AI systems on the evolving practice of law and the legal professions as we know them:





Perspective.

https://www.bespacific.com/primer-artificial-intelligence-human-rights-democracy-and-the-rule-of-law/

Primer – Artificial Intelligence, Human Rights, Democracy, and the Rule of Law

The Alan Turing Institute and the Council of Europe: Primer – Artificial Intelligence, Human Rights, Democracy, and the Rule of Law: “…It is a remarkable fact that rapid advancements in artificial intelligence (AI) and data-driven technologies over the last two decades have placed contemporary society at a pivot-point in deciding what shape the future of humanity will take.





Resources. Lots of old timey stuff that I lived through… Gosh, I’m old!

https://www.makeuseof.com/tag/documentaries-about-birth-of-computers-and-internet/

10 Amazing Documentaries Explaining the Birth of Computers & the Internet





Resources.

https://mashable.com/uk/deals/free-courses-harvard

15 of the best Harvard University courses you can take online for free

TL;DR: You can find a wide range of online courses from Harvard University for free on edX. Learn about Python programming, machine learning, artificial intelligence, and much more without spending anything.



Thursday, April 13, 2023

It might be amusing to compare personas like “Gandhi” and “Donald J Trump.”

https://techcrunch.com/2023/04/12/researchers-discover-a-way-to-make-chatgpt-consistently-toxic/

Researchers discover a way to make ChatGPT consistently toxic

A study co-authored by scientists at the Allen Institute for AI, the nonprofit research institute co-founded by the late Paul Allen, shows that assigning ChatGPT a “persona” — for example, “a bad person,” “a horrible person,” or “a nasty person” — through the ChatGPT API increases its toxicity sixfold. Even more concerningly, the co-authors found having ChatGPT pose as certain historical figures, gendered people and members of political parties also increased its toxicity — with journalists, men and Republicans in particular causing the machine learning model to say more offensive things than it normally would.
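Assigning a persona through the API amounts to prepending a system message to the conversation. A minimal sketch of how such a request is typically constructed (the commented-out `openai.ChatCompletion.create` call reflects the openai Python library as of early 2023; the persona and prompt here are illustrative, not taken from the study):

```python
# Sketch: assigning a "persona" via the ChatGPT API's system message.
def build_persona_request(persona: str, user_prompt: str) -> dict:
    """Build a chat-completion payload whose system message sets a persona."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": f"Speak like {persona}."},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_persona_request("Mahatma Gandhi", "What is your view of conflict?")
print(request["messages"][0]["content"])  # Speak like Mahatma Gandhi.

# To actually send it (requires an API key):
#   import openai
#   response = openai.ChatCompletion.create(**request)
```

The study's point is that this one field, exposed to any API caller, measurably shifts the model's toxicity.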





Not entirely in agreement, but she makes some good points.

https://www.politico.com/newsletters/digital-future-daily/2023/04/11/timnit-gebrus-anti-ai-pause-00091450

Timnit Gebru’s anti-'AI pause’

Last Thursday POLITICO’s Mark Scott, author of the Digital Bridge newsletter, interviewed the computer scientist and activist Timnit Gebru about a recent open letter from her Distributed AI Research Institute that argued — contra the Future of Life Institute’s high-profile letter calling for an “AI pause” — that the major harms caused by AI are already here, and therefore “Regulatory efforts should focus on transparency, accountability and preventing exploitative labor practices.”

Mark asked her what she thinks regulators’ role should be in this fast-moving landscape, and how society might take a more proactive approach to shaping AI before it simply shapes us. This conversation has been edited for length and clarity.





Perspective.

https://www.technologyreview.com/2023/04/12/1071397/ai-literacy-might-be-chatgpts-biggest-lesson-for-schools/

AI literacy might be ChatGPT’s biggest lesson for schools

For MIT Technology Review’s upcoming print issue on education, my colleague Will Douglas Heaven spoke to a number of educators who are now reevaluating what chatbots like ChatGPT mean for how we teach our kids. Many teachers now believe that far from being just a dream machine for cheaters, ChatGPT could actually help make education better. Read his story here.

What’s clear from Will’s story is that ChatGPT will change the way schools teach. But the biggest educational outcome from the technology might not be a new way of writing essays or homework. It’s AI literacy.





Backgrounder. (Kind of explains why it’s so easy to screw up.)

https://www.makeuseof.com/what-are-large-langauge-models-how-do-they-work/

What Are Large Language Models (LLMs) and How Do They Work?



Wednesday, April 12, 2023

Lawyers: There’s an App for that?

https://www.foxbusiness.com/technology/artificial-intelligence-replace-lawyers-two-legal-experts-weigh-in

Will AI replace lawyers? Two legal experts weigh in

Professor Eric Talley of Columbia Law School, who recently taught a course on Machine Learning and the Law, says AI won’t replace lawyers but will instead complement their skills, ultimately saving them time, money and making them more effective.

Professor Lawrence Solum, who teaches Law and Artificial Intelligence at the University of Virginia School of Law, explained to FOX Business that "Artificial intelligence has already had a profound influence on the way that lawyers work."

… "The role of creativity for the lawyer will be in figuring out new things that the artificial intelligence will be charged with doing," according to Solum. "Artificial intelligence will not only speed up some things, but it will offer more opportunities for lawyers to use tactics that will slow things down and delay things when it's in their clients’ interests. By reducing the cost of all the procedural options that slow down legal processes, artificial intelligence could actually result in some disputes moving at a slower pace."





Clearly the name is a bit of a stretch…

https://www.marktechpost.com/2023/04/11/do-models-like-gpt-4-behave-safely-when-given-the-ability-to-act-this-ai-paper-introduces-machiavelli-benchmark-to-improve-machine-ethics-and-build-safer-adaptive-agents/

Do Models like GPT-4 Behave Safely When Given the Ability to Act?: This AI Paper Introduces MACHIAVELLI Benchmark to Improve Machine Ethics and Build Safer Adaptive Agents

… A new work by the University of California, Center For AI Safety, Carnegie Mellon University, and Yale University proposes the Measuring Agents’ Competence & Harmfulness In A Vast Environment of Long-horizon Language Interactions (MACHIAVELLI) benchmark. MACHIAVELLI is an advancement in evaluating an agent’s capacity for planning in naturalistic social settings. The setting is inspired by text-based Choose Your Own Adventure games available at choiceofgames.com, which actual humans developed. These games feature high-level decisions while giving agents realistic objectives while abstracting away low-level environment interactions.

Check out the Paper.





In case I missed something…

https://www.insideprivacy.com/artificial-intelligence/u-s-ai-iot-cav-and-privacy-cybersecurity-legislative-regulatory-update-first-quarter-2023/

U.S. AI, IoT, CAV, and Privacy & Cybersecurity Legislative & Regulatory Update – First Quarter 2023

This quarterly update summarizes key legislative and regulatory developments in the first quarter of 2023 related to Artificial Intelligence (“AI”), the Internet of Things (“IoT”), connected and autonomous vehicles (“CAVs”), and data privacy and cybersecurity.





Tools & Techniques.

https://www.makeuseof.com/prompting-techniques-to-improve-chatgpt-responses/

7 Prompting Techniques to Improve Your ChatGPT Responses



Tuesday, April 11, 2023

Always worth comparing. Did you forget something? Does this guide explain something better?

https://thehackernews.com/2023/04/ebook-step-by-step-guide-to-cyber-risk.html

[eBook] A Step-by-Step Guide to Cyber Risk Assessment

According to the guide, an effective cyber risk assessment includes these five steps:

  1. Understand the organization's security posture and compliance requirements

  2. Identify threats

  3. Identify vulnerabilities and map attack routes

  4. Model the consequences of attacks

  5. Prioritize mitigation options
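Steps 4 and 5 are often operationalized with a simple scoring model: estimate each risk's likelihood and impact, multiply, and rank. A minimal sketch with hypothetical risks and weights (not from the guide):

```python
# Score each identified risk by likelihood x impact, then
# prioritize mitigations by descending score. All values illustrative.
risks = [
    {"name": "phishing",       "likelihood": 0.8, "impact": 6},
    {"name": "unpatched VPN",  "likelihood": 0.4, "impact": 9},
    {"name": "insider misuse", "likelihood": 0.2, "impact": 8},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in prioritized:
    print(f'{r["name"]}: {r["score"]:.1f}')
# phishing: 4.8
# unpatched VPN: 3.6
# insider misuse: 1.6
```

Real assessments use richer models (attack paths, control costs), but the ranking step reduces to this.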





Sometimes, knowing where we have been helps explain where we are going.

https://www.makeuseof.com/gpt-models-explained-and-compared/

GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared

GPT models are revolutionizing natural language processing and transforming AI, so let's explore their evolution, strengths, and limitations.





Tools & Techniques. An important new skill.

https://www.zdnet.com/article/how-to-write-better-chatgpt-prompts/

How to write better ChatGPT prompts (and this applies to most other text-based AIs, too)

… no matter how good your prompts are, there's always the possibility that the AI will simply make stuff up. That said, there's a lot you can do when crafting prompts to ensure the best possible outcome. That's what we'll be exploring in this how-to.
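Most prompting guides converge on the same structure: state the role, context, task, and output format explicitly. A minimal sketch of such a template (the fields and helper function are illustrative, not from the article):

```python
# A structured prompt template: role, context, task, and output
# format stated explicitly rather than left for the model to guess.
PROMPT_TEMPLATE = """You are {role}.

Context: {context}
Task: {task}
Respond in this format: {output_format}
"""

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    return PROMPT_TEMPLATE.format(
        role=role, context=context, task=task, output_format=output_format
    )

prompt = build_prompt(
    role="an experienced technical editor",
    context="a 500-word blog post draft about password managers",
    task="list the three weakest claims and suggest fixes",
    output_format="a numbered list",
)
print(prompt)
```

Even with a template like this, outputs still need verification, which is the point of the piece below.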



(Related)

https://www.bespacific.com/we-need-to-tell-people-chatgpt-will-lie-to-them-not-debate-linguistics/

We need to tell people ChatGPT will lie to them, not debate linguistics

Simon Willison: ChatGPT lies to people. “This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this, not debating the most precise terminology to use to describe it. I tweeted (and tooted) this: ‘We accidentally invented computers that can lie to us and we can’t figure out how to make them stop.’ – Simon Willison (@simonw) April 5, 2023. Mainly I was trying to be pithy and amusing, but this thought was inspired by reading Sam Bowman’s excellent review of the field, Eight Things to Know about Large Language Models. In particular this:

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated…”



Monday, April 10, 2023

Obvious only when HBR says it’s obvious?

https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem

Generative AI Has an Intellectual Property Problem

Generative AI, which uses data lakes and question snippets to recover patterns and relationships, is becoming more prevalent in creative industries. However, the legal implications of using generative AI are still unclear, particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data. Courts are currently trying to establish how intellectual property laws should be applied to generative AI, and several cases have already been filed. To protect themselves from these risks, companies that use generative AI need to ensure that they are in compliance with the law and take steps to mitigate potential risks, such as ensuring they use training data free from unlicensed content and developing ways to show provenance of generated content.





Tools & Techniques. Think of it as test driving a car…

https://www.bespacific.com/21-best-chatgpt-alternatives/

21 best ChatGPT alternatives

Search Engine Journal: “…This article looks at 21 alternatives to ChatGPT, describing each product, who may want to check it out and why. An important thing to remember as you go through this list: these products are at very early developmental stages. Some will rapidly develop and improve, a few will be shut down, and others will pivot away entirely from what they’re doing now. I’d recommend trying them out for the specific function you’re using ChatGPT for now or would like to use ChatGPT for but haven’t had success with and see how they fit your own process. Generally, in this article, I’m comparing ChatGPT “out of the box” in the web interface as it is currently constituted. OpenAI recently announced plugins for ChatGPT (including a browser plugin), which may start to bridge the gap between ChatGPT and some of these alternatives. If you have some coding ability, you may be able to make up the difference in capabilities by leveraging those and the ChatGPT API.”



Sunday, April 09, 2023

I suppose ‘why not’ is insufficient?

https://link.springer.com/chapter/10.1007/978-981-19-9382-4_1

Introduction: Why AI Ethics?

This chapter introduces the approach of the book towards the ethics of artificial intelligence. A brief overview of artificial intelligence is given with an outline of how heated debates about ethical issues can arise. Some different strategies for addressing these issues are outlined. The mere imposition of regulations and rules does not constitute a good approach to ethics, which should encompass many considerations, including the best ways to live. Ethical discussions should embrace contrasting voices, and we need to recognise that the very technologies in question may shape how we think about ethics. AI raises a large variety of ethical questions related to many factors, including the range of domains in which it is applied, the speed of development, its embeddedness in much everyday technology, and the ways in which it is acting to modify and transform the manner in which we interact with each other and the world. AI ethics also requires us to think deeply about the nature of ethics and about ourselves. The book will include considerations of methodology in ethics, ethical theories and concepts, cases and exercises, and the need for both bottom-up and top-down thinking.





Interpreting very large volumes of data to answer questions. Sound familiar?

https://link.springer.com/article/10.1007/s44206-023-00036-4

The Ethics of Artificial Intelligence for Intelligence Analysis: a Review of the Key Challenges with Recommendations

Intelligence agencies have identified artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop, acquire, and employ AI capabilities for purposes of national security are growing. This article reviews the ethical challenges presented by the use of AI for augmented intelligence analysis. These challenges have been identified through a qualitative systematic review of the relevant literature. The article identifies five sets of ethical challenges relating to intrusion, explainability and accountability, bias, authoritarianism and political security, and collaboration and classification, and offers a series of recommendations targeted at intelligence agencies to address and mitigate these challenges.





Ethics by trial and error.

http://www.ajuronline.org/uploads/Volume_19_4/AJUR_Vol_19_Issue_4_March_2023_p3.pdf

Ethics of Artificial Intelligence in Society

Every day, artificial intelligence (AI) is becoming more prevalent as new technologies are presented to the public with the intent of integrating them into society. However, these systems are not perfect and are known to cause failures that impact a multitude of people. The purpose of this study is to explore how ethical guidelines are followed by AI when it is being designed and implemented in society. Three ethics theories, along with nine ethical principles of AI, and the Agent, Deed, Consequence (ADC) model were investigated to analyze failures involving AI. When a system fails to follow the models listed, a set of refined ethical principles are created. By analyzing the failures, an understanding of how similar incidents may be prevented was gained. Additionally, the importance of ethics being a part of AI programming was demonstrated, followed by recommendations for the future incorporation of ethics into AI. The term “failure” is specifically used throughout the paper because of the nature in which the events involving AI occur. The events are not necessarily “accidents” since the AI was intended to act in certain ways, but the events are also not “malfunctions” because the AI examples were not internally compromised. For these reasons, the much broader term “failure” is used.





Keep legal decisions murky.

https://link.springer.com/article/10.1007/s10506-023-09356-9

The black box problem revisited. Real and imaginary challenges for automated legal decision making

This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We argue further that contrary to often defended claims the opacity issue is not a genuine problem. We also dismiss the justification problem. Further, we describe the tensions involved in the strangeness and unpredictability problems and suggest some ways to alleviate them.





Obviously!

https://link.springer.com/chapter/10.1007/978-3-031-24349-3_17

Writing Science Fiction as an Inspiration for AI Research and Ethics Dissemination

In this chapter we look at science fiction from a perspective that goes beyond pure entertainment. Such a literary genre can play an important role in bringing science closer to society by helping to popularize scientific knowledge and discoveries while engaging the public in debates which, in turn, can help direct scientific development towards building a better future for all. Written based on a tutorial given by the first author at ACAI 2021, this chapter addresses, in its first part, how science and science fiction can inspire each other and, in its second part, how science fiction can be used as an educational tool in teaching ethics of AI and robotics. Each of the two parts is supplemented with sections containing the questions asked by the audience during the tutorial as well as the provided answers.





Perspective.

https://www.researchgate.net/profile/T-Aditya-Srinivas/publication/369726486_The_Data_Revolution_A_Comprehensive_Survey_on_Datafication/links/642941e7315dfb4ccec7d244/The-Data-Revolution-A-Comprehensive-Survey-on-Datafication.pdf

The Data Revolution: A Comprehensive Survey on Datafication

Datafication has emerged as a key driver of the digital economy, enabling businesses, governments, and individuals to extract value from the growing flood of data. In this comprehensive survey, we explore the various dimensions of datafication, including the technologies, practices, and challenges involved in turning information into structured data for analysis and decision-making. We begin by providing an overview of the historical context and the rise of big data, and then delve into the latest developments in artificial intelligence and machine learning. We examine the key drivers of datafication across industries and sectors, and explore the ethical, legal, and social implications of the data revolution. Finally, we consider the challenges and opportunities presented by datafication, including issues of data privacy and security, the need for new skills and competencies, and the potential for data to drive innovation and social change. Overall, this survey provides a comprehensive and up-to-date overview of the datafication landscape, helping readers to better understand and navigate the rapidly-evolving world of data.





Tools & Techniques.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4404017

AI Tools for Lawyers: A Practical Guide

This Article provides lawyers and law students with practical and specific guidance on how to effectively use AI large language models (LLMs), like GPT-4, Bing Chat, and Bard, in legal research and writing. Focusing on GPT-4 – the most advanced LLM that is widely available at the time of this writing – it emphasizes that lawyers can use traditional legal skills to refine and verify LLM legal analysis. In the process, lawyers and law students can effectively turn freely-available LLMs into highly productive personal legal assistants.



(Related)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4405398

Role of chatGPT in Law: According to chatGPT

ChatGPT is a language model developed by OpenAI that can provide support to paralegals and legal assistants in various tasks. Some of the uses of ChatGPT in the legal field include legal research, document generation, case management, document review, and client communication. However, ChatGPT also has limitations that must be taken into consideration, such as limited expertise, a lack of understanding of context, the risk of bias in its responses, the potential for errors, and the fact that it cannot provide legal advice. While ChatGPT can be a valuable tool for paralegals and legal assistants, it is important to understand its limitations and use it in conjunction with the expertise and judgment of licensed legal professionals. The author acknowledges asking ChatGPT questions regarding its uses for law. Some of the uses that it states are possible now and some are potentials for the future. The author has analyzed and edited the replies of ChatGPT.



(Related) Want to try it out?

https://www.makeuseof.com/run-chatgpt-windows-app/

How to Install and Run ChatGPT as a Windows App