Sunday, October 17, 2021

Reducing lawyers to a math formula – I love it!

https://link.springer.com/article/10.1007/s10506-021-09300-9

Contract as automaton: representing a simple financial agreement in computational form

We show that the fundamental legal structure of a well-written financial contract follows a state-transition logic that can be formalized mathematically as a finite-state machine (specifically, a deterministic finite automaton or DFA). The automaton defines the states that a financial relationship can be in, such as “default,” “delinquency,” “performing,” etc., and it defines an “alphabet” of events that can trigger state transitions, such as “payment arrives,” “due date passes,” etc. The core of a contract describes the rules by which different sequences of events trigger particular sequences of state transitions in the relationship between the counterparties. By conceptualizing and representing the legal structure of a contract in this way, we expose it to a range of powerful tools and results from the theory of computation. These allow, for example, automated reasoning to determine whether a contract is internally coherent and whether it is complete relative to a particular event alphabet. We illustrate the process by representing a simple loan agreement as an automaton.
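The abstract's core idea can be sketched in a few lines of code. The states, events, and transition rules below are hypothetical stand-ins for the clauses of a simple loan agreement, not the paper's exact formalization:

```python
# A toy DFA for a loan agreement: states of the relationship, an
# "alphabet" of events, and a transition function over both.

STATES = {"performing", "delinquent", "default", "paid_off"}
EVENTS = {"payment_arrives", "due_date_passes", "cure_period_expires", "final_payment"}

# DELTA maps (current state, event) -> next state.
DELTA = {
    ("performing", "payment_arrives"):     "performing",
    ("performing", "due_date_passes"):     "delinquent",
    ("performing", "cure_period_expires"): "performing",
    ("performing", "final_payment"):       "paid_off",
    ("delinquent", "payment_arrives"):     "performing",
    ("delinquent", "due_date_passes"):     "delinquent",
    ("delinquent", "cure_period_expires"): "default",
    ("delinquent", "final_payment"):       "paid_off",
    # "default" and "paid_off" are absorbing states in this sketch.
    **{("default", e): "default" for e in
       ("payment_arrives", "due_date_passes", "cure_period_expires", "final_payment")},
    **{("paid_off", e): "paid_off" for e in
       ("payment_arrives", "due_date_passes", "cure_period_expires", "final_payment")},
}

def is_complete(delta, states, events):
    """'Complete relative to an event alphabet': every (state, event)
    pair has a defined transition -- the kind of automated check the
    abstract describes."""
    return all((s, e) in delta for s in states for e in events)

def run(delta, start, event_sequence):
    """Replay a sequence of events and return the resulting state."""
    state = start
    for event in event_sequence:
        state = delta[(state, event)]
    return state
```

For example, `run(DELTA, "performing", ["due_date_passes", "cure_period_expires"])` lands in `"default"`, and `is_complete(DELTA, STATES, EVENTS)` mechanically verifies that no sequence of events can reach an undefined situation.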


(Related)

https://www.amazon.com/Digital-Lawyering-Technology-Practice-Century-ebook/dp/B09HVDZCNJ/ref=sr_1_1?dchild=1&keywords=Digital+Lawyering%3A+Technology+and+Legal+Practice+in+the+21st+Century&qid=1634475734&s=books&sr=1-1

Digital Lawyering: Technology and Legal Practice in the 21st Century

Digital technologies have already begun a radical transformation of the legal profession and the justice system. Digital Lawyering introduces students to all key topics, from the role of blockchain to the use of digital evidence in courtrooms, supported by contemporary case studies and integrated, interactive activities. The book considers specific forms of technology, such as Big Data, analytics and artificial intelligence, but also broader issues including regulation, privacy and ethics. It encourages students to explore the impact of digital lawyering upon professional identity, and to consider the emerging skills and competencies employers now require. Using this textbook will allow students to identify, discuss and reflect on emerging issues and trends within digital lawyering in a critical and informed manner, drawing on both its theoretical basis and accounts of its use in legal practice.


(Related)

https://www.taylorfrancis.com/chapters/edit/10.4324/9780429298219-7/using-artificial-intelligence-enhance-augment-delivery-legal-services-ann-thanaraj

Using artificial intelligence to enhance and augment the delivery of legal services

Artificial intelligence (AI) will redefine the legal profession, changing and evolving the role lawyers perform. New types of work will become available, including symbiotic new specialisms that blend human expertise in complex high-level advisory work with technology and its affordances. This chapter will explore how AI is already supporting the profession and what the future holds for further collaboration, questioning the requisite set of skills, tools and assets needed to thrive in the changing legal world. It will also include an ethical exploration of the extent to which AI decision-making can impact our professional responsibilities and legal ethics, and question the need to realign the fundamental tenets of professional ethics for law practice. AI will most certainly be at the forefront of how the legal profession evolves over the next 50 years.



As a retired auditor, this interests me. Perhaps we should train an AI to conduct audits like this?

https://www.axios.com/algorithmic-audits-ai-bias-a895bba4-05bb-4d6e-bd01-59c18627393d.html

Audits attempt to clean up AI bias

AI algorithms employed in everything from hiring to lending to criminal justice have a persistent and often invisible problem with bias.

The big picture: One solution could be audits that aim to determine whether an algorithm is working as intended, whether it's disproportionately affecting different groups of people and, if there are problems, how they can be fixed.

Financial audits exist in part to open up the black box of a company's internal operations to outside investors, and ensure that a company remains in compliance with financial laws and regulations.

In the case of algorithmic audits, however, the actual workings of AI can be a black box even to the company itself: unless explainability is built into the foundation of an algorithmic model, it can be easy to get lost.


(Related) I can easily foresee an AI vigilante. Think “Terminator.” (Think AIs that think of themselves as victims.)

https://academic.oup.com/ijlit/advance-article-abstract/doi/10.1093/ijlit/eaab008/6389717

AI ethical bias: a case for AI vigilantism (AIlantism) in shaping the regulation of AI

The debate on the ethical challenges of artificial intelligence (AI) is nothing new. Researchers and commentators have highlighted the deficiencies of AI technology regarding visible minorities, women, youth, seniors and indigenous people. Currently, there are several ethical guidelines and recommendations for AI. These guidelines provide ethical principles and human-centred values to guide the creation of responsible AI. Since these guidelines are non-binding, they have had no significant effect. It is time to harness initiatives to regulate AI globally and incorporate human rights and ethical standards in AI creation. Governments need to intervene, and discriminated-against groups should lend their voice to shape AI regulation to suit their circumstances. This study highlights the discriminatory and technological risks suffered by minority/marginalised groups owing to AI’s ethical dilemma. As a result, it recommends the guarded deployment of AI vigilantism to regulate the use of AI technologies and prevent harm arising from AI systems’ operations. The appointed AI vigilantes will comprise mainly persons/groups with an increased risk of their rights being disproportionately impacted by AI. It is a well-intentioned group that will work with the government to avoid abuse of powers.


(Related) True bias or simple math?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3940705

When a Small Change Makes a Big Difference: Algorithmic Fairness Among Similar Individuals

If a machine learning algorithm treats two people very differently because of a slight difference in their attributes, the result intuitively seems unfair. Indeed, an aversion to this sort of treatment has already begun to affect regulatory practices in employment and lending. But an explanation, or even a definition, of the problem has not yet emerged. This Article explores how these situations—when a Small Change Makes a Big Difference (SCMBDs)—interact with various theories of algorithmic fairness related to accuracy, bias, strategic behavior, proportionality, and explainability. When SCMBDs are associated with an algorithm’s inaccuracy, such as overfitted models, they should be removed (and routinely are). But outside those easy cases, when SCMBDs have, or seem to have, predictive validity, the ethics are more ambiguous. Various strands of fairness (like accuracy, equity, and proportionality) will pull in different directions. Thus, while SCMBDs should be detected and probed, what to do about them will require humans to make difficult choices between social goals.



I told my AI to file my taxes; is it my fault if it didn’t?

https://ieeexplore.ieee.org/abstract/document/9564076

Delegation of moral tasks to automated agents: The impact of risk and context on trusting a machine to perform a task

The rapid development of automation has led to machines increasingly taking over tasks previously reserved for human operators, especially those involving high-risk settings and moral decision making. To best benefit from the advantages of automation, these systems must be integrated into work environments, and into society as a whole. Successful integration requires understanding how users gain acceptance of technology by learning to trust in its reliability. It is thus essential to examine factors that influence the integration, acceptance, and use of automated technologies. As such, this study investigated the conditions under which human operators were willing to relinquish control and delegate tasks to automated agents, by examining risk and context factors experimentally. In a decision task, participants (N=43, 27 female) were placed in different situations in which they could choose to delegate a task to an automated agent or execute it manually. The results of our experiment indicated that both context and risk significantly influenced people’s decisions. While it was unsurprising that the reliability of an automated agent seemed to strongly influence trust in automation, the different types of decision support systems did not appear to impact participant compliance. Our findings suggest that contextual factors should be considered when designing automated systems that navigate moral norms and individual preferences.



A useful idea?

https://msocialsciences.com/index.php/mjssh/article/view/1086

A Study on the Laws Governing Facial Recognition Technology and Data Privacy in Malaysia

The advancement of technology in the past decade has enabled humans to achieve many great things. Among them is facial recognition technology, which combines two techniques, face detection and face recognition, to convert facial images of a person into readable data and connect them with other data sets, enabling identification, tracking and comparison. This study delves into the usage of facial recognition technology in Malaysia, where its regulation is almost non-existent. As its usage increases, the invasive capacity of this technology to collect and connect data poses a threat to the data privacy of Malaysian citizens. Due to this issue, other countries’ laws and policies regarding this technology are examined and compared with Malaysia’s. This enables the loopholes of the current law and policies to be identified and restructured, creating a clear path toward the proper regulations and changes that need to be made. Thus, this study aims to analyse the limitations of the law governing data privacy and its concept in Malaysia, along with the changes that need to be made. This study’s findings show the shortcomings of Malaysia’s law in governing data privacy, especially when it involves complex technology with great data-collection capability like facial recognition.



Somewhat of a rant, but if it took them 20 years to notice the “surveillance state,” perhaps these aren’t the best observers.

https://www.salon.com/2021/10/16/after-20-years-its-time-to-repeal-the-patriot-act-and-begin-to-dismantle-the-surveillance-state/

After 20 years, it's time to repeal the Patriot Act and begin to dismantle the surveillance state



Another perspective on the inevitable?

https://www.eimj.org/uplode/images/photo/The_Copyright_Protection_of%C2%A0AI_Created_works_by_the_European_Union_Copyright_Legislation..pdf

The Copyright Protection of AI-Created works by the European Union Copyright Legislation

In the European Union, copyright law has increasingly focused on broadening the scope of works that have a right to intellectual property protection. Currently, the law applies to a variety of work categories, including literary works, music, film, and sound recordings, among others. Although breakthroughs in artificial intelligence (AI) continue to contribute to the emergence of machine-generated creative works, the European Copyright Legislation framework does not consider non-human discoveries. There are currently breakthroughs that allow autonomous programs to create products of significant monetary worth ranging from software to literary works, photographs, and music.

To that end, this research study critically examines the current EU copyright legislation in order to understand its position on copyright protections for AI-generated works. Furthermore, the study explains why AI-generated works should be protected and what legal tools should be improved to provide copyright protection.

According to the findings, these creations should be protected as an incentive to developers and as a guarantee of technological advancement for society as a whole.

