Sunday, November 13, 2022

It’s not that easy…

https://www.tandfonline.com/doi/full/10.1080/13501763.2022.2126515

Governing AI – attempting to herd cats? Introduction to the special issue on the governance of artificial intelligence

Artificial Intelligence raises new, distinct governance challenges, as well as familiar governance challenges in novel ways. The governance of AI, moreover, is not an issue of distant futures; it is well underway – and it has characteristics akin to ‘herding cats’ with a mind of their own. This essay introduces the contributions to the special issue, situating them in broader social science literatures. It then sketches an interdisciplinary research agenda. It highlights the limits of ‘explainable AI’, makes the case for considering AI ethics and AI governance simultaneously, identifies ‘system effects’ arising from the introduction of AI applications as an underappreciated risk, and calls for policymakers to consider both the opportunities and the risks of AI. Focusing on the (ab)uses of AI, rather than on the highly complex, rapidly changing and hard-to-predict technology as such, might provide a superior approach to governing AI.





“Subtle is as subtle does.” F. Gump

https://www.makeuseof.com/what-is-living-off-the-land-attack/

What Is a Living-Off-the-Land Attack and How Can You Prevent It?

A LotL attack is a kind of fileless attack in which a hacker abuses programs already present on a device instead of installing malware. Because everything runs through trusted native tools, the attack is subtler and less likely to be discovered.

Some native programs hackers often use in LotL attacks include the command-line console (cmd.exe), PowerShell, the Windows Registry console, and the Windows Management Instrumentation command line (WMIC). Hackers also use the Windows-based and console-based script hosts (WScript.exe and CScript.exe). These tools ship with every Windows computer and are necessary for normal administrative tasks.
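Which is what makes detection hard: you can’t just block the binaries. One common defensive heuristic is to watch parent–child process pairs instead, since document handlers rarely have a legitimate reason to spawn a shell or script host. A minimal sketch in Python using psutil – the process names and the “Office spawning a shell” rule are illustrative assumptions, not a vetted detection ruleset:

```python
# Minimal LotL heuristic: flag trusted "living off the land" binaries
# (PowerShell, cmd, the script hosts) whose parent process is a
# document handler such as Word or Outlook -- a classic LotL pattern.
# Name lists below are illustrative, not a complete ruleset.
import psutil

LOLBINS = {"powershell.exe", "pwsh.exe", "cmd.exe",
           "wscript.exe", "cscript.exe", "mshta.exe"}
SUSPECT_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def suspicious_children():
    """Yield (parent_name, child_name) pairs matching the heuristic."""
    for proc in psutil.process_iter(["name"]):
        try:
            name = (proc.info["name"] or "").lower()
            if name not in LOLBINS:
                continue
            parent = proc.parent()
            if parent and parent.name().lower() in SUSPECT_PARENTS:
                yield parent.name(), name
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # process exited or is protected; skip it

if __name__ == "__main__":
    for parent, child in suspicious_children():
        print(f"possible LotL activity: {parent} -> {child}")
```

Real EDR products do this with kernel-level telemetry and far richer rules; the point of the sketch is only that the signal lives in *who launched what*, not in the binaries themselves.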





Coming soon to a courtroom near you?

https://www.databreaches.net/uk-hacked-evidence-and-stolen-data-swamp-english-courts/

UK: Hacked evidence and stolen data swamp English courts

Franz Wild, Ed Siddons, Simon Lock, Jonathan Calvert, and George Arbuthnott report:

A multimillion-pound high court case between an authoritarian Gulf emirate and an Iranian-American businessman has revealed how hacked evidence is being used by leading law firms to advance their clients’ claims.
It includes allegations that a former Metropolitan Police officer hired Indian hackers and that lawyers from a top City firm held a secret “perjury school” in the Swiss Alps to prepare false witness testimonies about how they got hold of illegally obtained information.
Last week the Bureau of Investigative Journalism and the Sunday Times exposed the criminal activities of Aditya Jain, a 31-year-old computer security expert who set up a “hack-for-hire” operation from his apartment in Gurugram, India.

Read more at the Bureau of Investigative Journalism, keeping in mind this statement from the piece:

A striking feature of the English legal system is that a judge will accept hacked emails as evidence in court unless persuaded to exclude it. Peter Ashford, a London solicitor and expert in the admissibility of illegal evidence, claims the English system is “the most liberal”. He added: “Even if you’ve done the hacking, you’ve still got a pretty good chance of getting it in [to the court].”





AI judges for some things, but not all?

https://link.springer.com/chapter/10.1007/978-3-031-15746-2_14

Automated Justice: Issues, Benefits and Risks in the Use of Artificial Intelligence and Its Algorithms in Access to Justice and Law Enforcement

The use of artificial intelligence (AI) in the field of law has generated many hopes. Some have seen it as a way of relieving court congestion, facilitating investigations, and making sentences for certain offences more consistent—and therefore fairer. While these tools can indeed assist investigators and judges, particularly in finding evidence during the investigative process or preparing legal summaries, the panorama of current uses is far from rosy: it often clashes with the reality of use in the field and raises serious questions regarding human rights. This chapter uses the Robodebt case to explore some of the problems with introducing automation into legal systems with little human oversight. AI—especially if it is poorly designed—has biases in its data and learning pathways that need to be corrected. The infrastructures that carry these tools may fail, introducing novel bias. All these elements are poorly understood by the legal world and can lead to misuse. In this context, there is a need to identify both the users of AI in the area of law and the uses made of it, as well as a need for transparency, the rules and contours of which have yet to be established.
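Worth remembering that the Robodebt failure was, at bottom, arithmetic: averaging annual income across fortnights misrepresents anyone with lumpy earnings. A toy sketch of the flaw – all figures and the eligibility threshold are invented for illustration; the real scheme compared averaged tax-office income against reported fortnightly income:

```python
# Toy illustration of the income-averaging flaw at the heart of
# Robodebt (all figures invented). Annual income gets spread evenly
# across 26 fortnights, so a person who earned everything in half
# the year looks as if they earned steadily all year.
FORTNIGHTS = 26
THRESHOLD = 500  # hypothetical cut-off: benefits payable below this

actual = [2_000] * 13 + [0] * 13               # lumpy real earnings
annual = sum(actual)                           # 26,000
averaged = [annual / FORTNIGHTS] * FORTNIGHTS  # 1,000 every fortnight

eligible_real = sum(1 for x in actual if x < THRESHOLD)    # 13
eligible_avg = sum(1 for x in averaged if x < THRESHOLD)   # 0

print(f"eligible fortnights (real income):     {eligible_real}")
print(f"eligible fortnights (averaged income): {eligible_avg}")
# Averaging erases every genuinely eligible fortnight, so an automated
# comparison "discovers" an overpayment that never happened.
```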





This could make it easier to ‘sell’ AI ethics…

https://link.springer.com/chapter/10.1007/978-3-031-09846-8_13

Ethics Auditing: Lessons from Business Ethics for Ethics Auditing of AI

This chapter reviews the business ethics literature on ethics auditing to extract lessons for the emerging practice of ethics auditing of Artificial Intelligence (AI). It reviews the definitions, purposes and motivations of ethics audits, identifies their benefits as well as their limitations, and compares various theoretical and practical approaches to ethics auditing. It distils seven lessons for the ethics auditing of AI, finding that ethics audits need to be comprehensive, involve stakeholders, encourage behaviour change, be pragmatic and rigorous, be widely endorsed, fit their context while remaining comparable, and integrate a technical dimension with an organisational one. While ethics auditing can also have financial benefits, it is crucial that its main goal remain the improvement of the ethical performance and meaningful accountability of the audited organisation. The novel elements of AI should not blind us to the continuities of social embeddedness and organisational dynamics. Ethics auditing of AI can learn valuable lessons from previous efforts, failed and successful, to audit the ethics of organisations.





As opposed to ‘fair and legal’ discrimination. (Define one and you automatically define the other.)

https://link.springer.com/chapter/10.1007/978-3-031-17040-9_2

Unfair and Illegal Discrimination

There is much debate about the ways in which artificial intelligence (AI) systems can include and perpetuate biases and lead to unfair and often illegal discrimination against individuals on the basis of protected characteristics, such as age, race, gender and disability. This chapter describes three cases of such discrimination. It starts with an account of the use of AI in hiring decisions that led to discrimination based on gender. The second case explores the way in which AI can lead to discrimination when applied in law enforcement. The final example looks at implications of bias in the detection of skin colour. The chapter then discusses why these cases are considered to be ethical issues and how this ethics debate relates to well-established legislation around discrimination. The chapter proposes two ways of raising awareness of possible discriminatory characteristics of AI systems and ways of dealing with them: AI impact assessments and ethics by design.
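The hiring example maps onto a check that long predates AI: compare selection rates across groups and flag any ratio below four-fifths of the most-favoured group’s rate, per US EEOC guidance. A minimal sketch with invented data – the groups, numbers, and column layout are all made up for illustration:

```python
# Disparate-impact check on hiring outcomes: compare each group's
# selection rate to the most-favoured group's rate. Ratios below
# 0.8 (the US EEOC "four-fifths rule") flag possible adverse impact.
# The applicant data here is invented purely for illustration.
from collections import Counter

applications = [  # (group, was_hired)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

hired = Counter(g for g, ok in applications if ok)
total = Counter(g for g, _ in applications)
rates = {g: hired[g] / total[g] for g in total}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "ADVERSE IMPACT?" if ratio < 0.8 else "ok"
    print(f"group {group}: rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```

A check this crude is exactly the kind of thing an AI impact assessment would run early and often; passing it proves very little, but failing it is a loud signal.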





‘cause I gots kulture.

https://www.makeuseof.com/tag/top-10-sites-listen-classical-music/

The Top 10 Sites To Listen To Classical Music


