Violating privacy by algorithm?
https://www.theguardian.com/society/2021/nov/21/dwp-urged-to-reveal-algorithm-that-targets-disabled-for-benefit
DWP urged to reveal algorithm that ‘targets’ disabled for benefit fraud
Disabled people are being subjected to stressful checks and months of frustrating bureaucracy after being identified as potential benefit fraudsters by an algorithm the government is refusing to disclose, according to a new legal challenge.
A group in Manchester has launched the action after mounting testimony from disabled people in the area that they were being disproportionately targeted for benefit fraud investigations. Some said they were living in “fear of the brown envelope” showing their case was being investigated. Others said they had received a phone call, without explanation as to why they had been flagged.
The Department for Work and Pensions (DWP) has previously conceded that it uses “cutting-edge artificial intelligence” to track possible fraud but has so far rebuffed attempts to explain how the algorithm behind the system was compiled. Campaigners say that once flagged, those being examined can face an invasive and humiliating investigation lasting up to a year.
Fertile ground for recruiting Privacy Lawyers?
https://www.theregister.com/2021/11/20/in_brief_ai/
AI surveillance software increasingly used to make sure contract lawyers are doing their jobs at home
Contract lawyers are increasingly working under the thumb of facial-recognition software as they continue to work from home during the COVID-19 pandemic.

The technology is hit-and-miss, judging from interviews with more than two dozen American attorneys conducted by the Washington Post. To make sure these contract lawyers, who take on short-term gigs, are working as expected and are handling sensitive information appropriately, their every move is followed by webcams.

The monitoring software is mandated by their employers, and is used to control access to the legal documents that need to be reviewed. If the system thinks someone else is looking at the files on the computer, or equipment has been set up to record information from the screen, the user is booted out.
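The vendors don’t disclose how their systems work, but the basic gating logic is easy to sketch. Below is a minimal, hypothetical illustration using OpenCV’s bundled Haar-cascade face detector: the document session stays unlocked only while exactly one face is visible to the webcam. This is a toy under stated assumptions, not any vendor’s actual product.

```python
# Hypothetical sketch of webcam-based document gating (toy, not a real product).
# Assumes: pip install opencv-python
import cv2

# OpenCV ships this pretrained frontal-face detector with the package.
DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def session_allowed(frame) -> bool:
    """Allow document access only while exactly one face is in view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Zero faces: the reviewer stepped away. Two or more: possible shoulder-surfer.
    return len(faces) == 1

cam = cv2.VideoCapture(0)
ok, frame = cam.read()
if ok and not session_allowed(frame):
    print("Document viewer locked: face check failed.")
cam.release()
```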
(Related)
https://www.tribuneindia.com/news/jobs&careers/how-wearable-tech-can-reveal-your-performance-at-work-341035
How wearable tech can reveal your performance at work
Not just keeping you fit and healthy, data from fitness trackers and smart watches can also predict individual job performance as workers travel to and from the office wearing those devices, says a study.

Previous research on commuting indicates that stress, anxiety, and frustration from commuting can lead to a less efficient workforce and increased counterproductive work behaviour.

Researchers from Dartmouth College in the US built mobile-sensing machine learning (ML) models to accurately predict job performance via data derived from wearable devices.

… "Compared to low performers, high performers display greater consistency in the time they arrive and leave work," said Pino Audia, a co-author of the study. "This dramatically reduces the negative impacts of commuting variability and suggests that the secret to high performance may lie in sticking to better routines." While high performers had physiological indicators that are consistent with physical fitness and stress resilience, low performers had higher stress levels in the times before, during, and after commutes.
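The article doesn’t describe the Dartmouth features or models, but the general recipe is straightforward to sketch. The toy below assumes made-up features (arrival-time variability, commute heart rate, step count) and a synthetic label that echoes the quoted finding, just to show what a wearable-based performance classifier looks like in scikit-learn.

```python
# Toy sketch of a mobile-sensing performance model (synthetic data; the
# study's actual features and models are not public here).
# Assumes: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
arrival_std = rng.uniform(2, 60, n)   # variability of arrival time, minutes
commute_hr = rng.uniform(60, 110, n)  # mean heart rate during commute, bpm
steps = rng.uniform(2000, 15000, n)   # daily step count
X = np.column_stack([arrival_std, commute_hr, steps])
# Synthetic label echoing the reported finding: consistent arrivals plus
# lower commute stress correlate with high performance.
y = (arrival_std + 0.5 * (commute_hr - 60) < 40).astype(int)

model = GradientBoostingClassifier()
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```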
When laws conflict?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3961863
Legal Opacity: Artificial Intelligence’s Sticky Wicket
Proponents of artificial intelligence (“AI”) transparency have carefully illustrated the many ways in which transparency may be beneficial: to prevent safety and unfairness issues, to promote innovation, and to effectively provide recovery or support due process in lawsuits. However, impediments to transparency goals, described as opacity, or the “black-box” nature of AI, present significant issues for promoting these goals. An undertheorized perspective on opacity is legal opacity, where competitive and often discretionary legal choices, coupled with regulatory barriers, create opacity. Although legal opacity does not affect AI alone, the combination of technical opacity in AI systems with legal opacity amounts to a nearly insurmountable barrier to transparency goals. Types of legal opacity, including trade secrecy status, contractual provisions that promote confidentiality and data ownership restrictions, and privacy law, independently and cumulatively make the black box substantially more opaque.

The degree to which legal opacity should be limited or disincentivized depends on the specific sector and the transparency goals of specific AI technologies, technologies which may dramatically affect people’s lives or may simply be introduced for convenience. This Response proposes a contextual approach to transparency: legal opacity may be limited in situations where the individual or patient benefits, when data sharing and technology disclosure can be incentivized, or in a protected state when transparency and explanation are necessary.
Everything you ever wanted to know?
https://www.emerald.com/insight/content/doi/10.1108/S2398-601820210000008007/full/html
The Big Data World: Benefits, Threats and Ethical Challenges
Advances in Big Data, artificial intelligence and data-driven innovation bring enormous benefits for society overall and for different sectors. By contrast, their misuse can lead to data workflows bypassing the intent of privacy and data protection law, as well as of ethical mandates. This may be referred to as the ‘creep factor’ of Big Data, and it needs to be tackled right away, especially considering that we are moving towards the ‘datafication’ of society, where devices to capture, collect, store and process data are becoming ever cheaper and faster, whilst computational power is continuously increasing. Using Big Data in truly anonymisable ways, within an ethically sound and societally focussed framework, can act as an enabler of sustainable development; using Big Data outside such a framework poses a number of threats, potential hurdles and multiple ethical challenges. Some examples are the impact on privacy caused by new surveillance tools and data-gathering techniques, including group privacy, high-tech profiling, automated decision making and discriminatory practices.

In our society, everything can be given a score, and critical, life-changing opportunities are increasingly determined by such scoring systems, often obtained through secret predictive algorithms applied to data to determine who has value. It is therefore essential to guarantee the fairness and accuracy of such scoring systems and to ensure that the decisions relying upon them are made in a legal and ethical manner, avoiding the risk of stigmatisation capable of affecting individuals’ opportunities. Likewise, it is necessary to prevent so-called ‘social cooling’: the long-term negative side effects of data-driven innovation, in particular of such scoring systems and of the reputation economy. It is reflected, for instance, in self-censorship, risk-aversion and a lack of exercise of free speech generated by increasingly intrusive Big Data practices lacking an ethical foundation.

Another key ethics dimension pertains to human-data interaction in Internet of Things (IoT) environments, which is increasing the volume of data collected, the speed of the process and the variety of data sources. It is urgent to further investigate aspects like the ‘ownership’ of data and other hurdles, especially considering that the regulatory landscape is developing at a much slower pace than IoT and the evolution of Big Data technologies.

These are only some examples of the issues and consequences that Big Data raises, and they require adequate measures in response to the ‘data trust deficit’: moving not towards the prohibition of the collection of data, but rather towards the identification and prohibition of its misuse and of unfair behaviours and treatments once governments and companies have such data. At the same time, the debate should further investigate ‘data altruism’, exploring how the increasing amounts of data in our society can be concretely used for the public good, and the best modalities for implementing this.
Perhaps an AI detective agency?
https://journals.sagepub.com/doi/abs/10.1177/20322844211057019
Legal challenges in bringing AI evidence to the criminal courtroom
Artificial Intelligence (AI) is rapidly transforming the criminal justice system. One of the promising applications of AI in this field is the gathering and processing of evidence to investigate and prosecute crime. Despite its great potential, AI evidence also generates novel challenges to the requirements of the European criminal law landscape. This study aims to contribute to the burgeoning body of work on AI in criminal justice, elaborating upon an issue that has not received sufficient attention: the challenges triggered by AI evidence in criminal proceedings. The analysis is based on the norms and standards for evidence and fair trial, which are fleshed out in a large body of European case law. Through the lens of AI evidence, this contribution aims to reflect on these issues and offer new perspectives, providing recommendations that would help address the identified concerns and ensure that fair trial standards are effectively respected in the criminal courtroom.
Next article should discuss how to find a jury of AI peers.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3963422
The Legal Quandary When AI Is The Criminal
One assumption about AI is that there will always be a human held accountable for any bad acts that the AI perchance commits. Some, though, question this assumption and emphasize that the AI might presumably “act on its own,” or that it will veer far from its programming, or that the programmers that created the AI will be impossible to identify. [Or the programmers were themselves AI? Bob] A legal quandary is ostensibly raised via the advent of such AI that goes criminally bad (or was bad to begin with).
Making new law…
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3963426
Core Principles of Justice And Respective AI Impacts
A vital question worth asking is what will happen to our venerated principles of justice due to the advent of AI in the law. To grapple with that crucial matter, we first clarify the precepts of justice to be considered and then stepwise analyze how AI will impact each of them.
(Related)
https://scholar.law.colorado.edu/cgi/viewcontent.cgi?article=2500&context=articles
The Law of AI
The question of whether new technology requires new law is central to the field of law and technology. From Frank Easterbrook’s “law of the horse” to Ryan Calo’s law of robotics, scholars have debated the what, why, and how of technological, social, and legal co-development and construction. Given how rarely lawmakers create new legal regimes around a particular technology, the EU’s proposed “AI Act” (Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts) should put tech-law scholars on high alert. Leaked early this spring and officially released in April 2021, the AI Act aims to establish a comprehensive European approach to AI risk-management and compliance, including bans on some AI systems.

In Demystifying the Draft EU Artificial Intelligence Act, Michael Veale and Frederik Zuiderveen Borgesius provide a helpful and evenhanded entrée into this “world-first attempt at horizontal regulation of AI systems.” On the one hand, they admire the Act’s “sensible” aspects, including its risk-based approach, prohibitions of certain systems, and attempts at establishing public transparency. On the other, they note its “severe weaknesses,” including its reliance on “1980s product safety regulation” and “standardisation bodies with no fundamental rights experience.” For U.S. (and EU!) readers looking for a thoughtful overview and contextualization of a complex and somewhat inscrutable new legal system, this Article brings much to the table at a relatively concise length.
Obvious?
https://orbilu.uni.lu/bitstream/10993/48564/1/Blount%20RIDP%20PDF.pdf
APPLYING THE PRESUMPTION OF INNOCENCE TO POLICING WITH AI
This paper argues that predictive policing, which relies upon former arrest records, hinders the future application of the presumption of innocence. This is established by positing that predictive policing is comparable to traditional criminal investigations in substance and scope. Police records generally do not clarify whether former charges resulted in dismissal or acquittal or, conversely, conviction. Therefore, police, as state actors, may unlawfully act in reliance on an individual’s former arrest record despite a favourable disposition. Accordingly, it is argued that the presumption of innocence as a fair trial right may be effectively nullified by predictive policing.
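The paper’s core mechanism can be seen in miniature. In the toy sketch below, with entirely made-up records, a risk score is computed from arrest counts alone, so a person acquitted of every charge scores identically to one convicted of every charge: disposition never enters the calculation.

```python
# Toy illustration with fabricated records: an arrest-count "risk score"
# that never consults case disposition, mirroring the paper's concern.
records = [
    {"person": "A", "arrests": 3, "dispositions": ["acquitted"] * 3},
    {"person": "B", "arrests": 3, "dispositions": ["convicted"] * 3},
]

def risk_score(record: dict) -> int:
    # Only the arrest count is used; an acquittal weighs as much as a conviction.
    return record["arrests"]

for r in records:
    print(r["person"], "score:", risk_score(r))  # A and B both score 3
```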
(Related) The next step…
https://orbi.uliege.be/handle/2268/264969
The Use of AI Tools in Criminal Courts: Justice Done and Seen To Be Done?
Artificial intelligence (hereafter: AI) is impacting all sectors of society these days, including the criminal justice area. AI has indeed become an important tool in this area, whether for citizens seeking justice, legal practitioners, or police and judicial authorities. While there is already a large body of literature on the prediction and detection of crime, this article focuses on the current and future role of AI in the adjudication of criminal cases. A distinction will be made between AI systems that facilitate adjudication and those that could, in part or wholly, replace human judges. At each step, we will give some concrete examples and evaluate what are, or could be, the advantages and disadvantages of such systems when used in criminal courts.
AI is never cruel…
https://lexelectronica.openum.ca/files/sites/103/La-justice-dans-tous-ses-%C3%A9tats_Michael_Lang.pdf
REVIEWING ALGORITHMIC DECISION MAKING IN ADMINISTRATIVE LAW
Artificial intelligence is perhaps the most significant technological shift since the popularization of the Internet in the waning years of the 20th century. Artificial intelligence promises to affect most parts of the modern economy, from trucking and transportation to medical care and research. Our legal system has already begun to contemplate how artificially intelligent decision-making systems are likely to affect procedural fairness and access to justice. These effects have been underexamined in the area of administrative law, in which artificially intelligent systems might be used to expedite decision making, ensure the relatively equal treatment of like cases, and guard against discrimination. But the adoption of artificially intelligent systems by administrative decision makers also raises serious questions. This essay focuses on one such question: whether the administrative decisions taken by artificially intelligent systems are capable of meeting the duty of procedural fairness owed to the subjects of such decisions.

This essay is arranged in three sections. In the first, I briefly outline the increasing use of artificially intelligent systems in the administrative context. I focus primarily on machine learning algorithms and describe the technical challenge of inexplicability that they raise. In the second section, I set out the duty of administrative decision makers to explain their reasoning in certain contexts. In the third section, I argue that administrative processes that use artificially intelligent systems will likely complicate the effective discharge of this duty. Individuals subject to certain kinds of administrative decisions may be deprived of the reasons to which they are entitled. I argue that artificial intelligence might prompt us to rethink reason-giving practices in administrative law.
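The inexplicability problem the essay points to can be demonstrated in a few lines. In the toy sketch below (synthetic data, scikit-learn assumed installed, not any agency’s actual system), a black-box model produces a grant/deny decision, yet the nearest available “explanation” is a global feature-importance ranking that says nothing about why this particular applicant received this particular decision.

```python
# Toy sketch of machine-learning inexplicability (synthetic data, not any
# agency's actual system). Assumes: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))                  # eight anonymous applicant features
y = (X[:, 0] + 0.3 * X[:, 3] > 0).astype(int)  # hidden rule the model learns

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
applicant = rng.normal(size=(1, 8))
print("decision:", "grant" if model.predict(applicant)[0] else "deny")
# The closest thing to "reasons" is a global importance ranking, which does
# not state the grounds for this individual decision.
print("feature importances:", model.feature_importances_.round(2))
```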
Ethical medicine. Take two tablets and call me in the morning?
https://ieeexplore.ieee.org/abstract/document/9597180
Regulatory Framework of Artificial Intelligence in Healthcare
This paper provides an overview of the application of artificial intelligence in healthcare and what it implies in several respects, chief among them the privacy this new technology offers us versus the availability of information that it needs. We will also discuss the regulatory framework in the most important regions of the world, such as the United States and Europe, comparing the laws and strategies that organizations have used to preserve the security and control of artificial intelligence in healthcare. We will then set out the ethical challenges posed by the entry of this new technology into our lives, and situate artificial intelligence in its present-day context: how it emerged and its history over the years. Finally, we offer some conclusions, together with the authors’ personal opinion on everything discussed throughout the paper.
Backing into ethics?
https://ieeexplore.ieee.org/abstract/document/9611065
The Ethics of Artificial Intelligence
Chapter Abstract: The dramatic theoretical and practical progress of artificial intelligence in the past decade has raised serious concerns about its ethical consequences. In response, more than eighty organizations have proposed sets of principles for ethical artificial intelligence. The proposed principles overlap in their concern with values such as transparency, justice, fairness, human benefits, avoiding harm, responsibility, and privacy. But no substantive discussion of how principles for ethical AI can be analyzed, justified, and reconciled has taken place. Moreover, the values assumed by these principles have received little analysis and assessment. Perhaps issues about principles and values can be evaded by Stuart Russell's proposal that beneficial AI concerns people's preferences rather than their ethical principles and values.
Tools & Techniques
https://www.makeuseof.com/tag/best-walkie-talkie-app/
The Best Two-Way Walkie Talkie Apps for Android and iPhone