Someone watches...
The State of AI Ethics Report (June 2020)
With this inaugural edition of the State of AI Ethics, we hope to bring forward the most important developments that caught our attention at the Montreal AI Ethics Institute this past quarter. Our goal is to help you navigate this ever-evolving field swiftly and to allow you and your organization to make informed decisions. This pulse-check on the state of discourse, research, and development is geared towards researchers and practitioners alike who make decisions on behalf of their organizations when considering the societal impacts of AI-enabled solutions. We cover a wide set of areas in this report, spanning Agency and Responsibility, Security and Risk, Disinformation, Jobs and Labor, the Future of AI Ethics, and more. Our staff has worked tirelessly over the past quarter surfacing signal from the noise so that you are equipped with the right tools and knowledge to confidently tread this complex yet consequential domain.
Just as we made the CFO responsible for the accuracy of financial reporting?
Could regulating the creators deliver trustworthy AI?
Is a new regulated profession, such as an Artificial Intelligence (AI) Architect who is responsible and accountable for AI outputs, necessary to ensure trustworthy AI? AI is becoming all-pervasive and is often deployed in everyday technologies, devices, and services without our knowledge. Heightened awareness of AI in recent years has brought fear with it. This fear is compounded by the inability to point to a trustworthy source of AI; however, even the term "trustworthy AI" itself is troublesome. Some consider trustworthy AI to be that which complies with relevant laws, while others point to the requirement to comply with ethics and standards (whether in addition to or in isolation from the law). This immediately raises questions of whose ethics and which standards should be applied, and whether these are sufficient to produce trustworthy AI in any event.
When
you think, ‘That’s not right?’
Contestable Black Boxes
The right to contest a decision that has consequences for individuals or society is a well-established democratic right. Despite this right also being explicitly included in the GDPR in reference to automated decision-making, its study seems to have received much less attention in the AI literature than, for example, the right to explanation. This paper investigates the type of assurances that are needed in the contesting process when algorithmic black boxes are involved, opening new questions about the interplay of contestability and explainability. We argue that specialised complementary methodologies need to be developed to evaluate automated decision-making when a particular decision is contested. Further, we propose a combination of well-established software engineering and rule-based approaches as a possible socio-technical solution to the issue of contestability, one of the new democratic challenges posed by the automation of decision-making.
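To make the flavour of that proposal concrete, here is a minimal sketch, our illustration rather than the paper's implementation, of how a rule-based audit layer might sit alongside a black-box model so that a contested decision can be re-examined against explicit, human-readable rules. All names (Rule, DecisionRecord, contest_decision) and the loan example are hypothetical.

# Hypothetical sketch: a rule-based audit layer beside a black-box model,
# so a contested decision can be checked against explicit rules.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    """A human-readable constraint every decision must satisfy."""
    name: str
    predicate: Callable[[dict], bool]  # returns True if the rule holds

@dataclass
class DecisionRecord:
    """Everything logged at decision time, so it can be re-examined later."""
    inputs: dict        # features the model saw
    output: Any         # the contested decision
    model_version: str  # which black box produced it

def contest_decision(record: DecisionRecord, rules: list[Rule]) -> list[str]:
    """Re-evaluate a logged decision against the explicit rules.

    Returns the names of violated rules, giving the contesting party
    concrete, rule-level grounds rather than an opaque model score.
    """
    return [r.name for r in rules if not r.predicate(record.inputs)]

# Example: a loan rejection contested on documented rules.
rules = [
    Rule("income_verified", lambda x: x.get("income_verified", False)),
    Rule("age_not_used", lambda x: "age" not in x),
]
record = DecisionRecord(
    inputs={"income_verified": True, "age": 42},
    output="rejected",
    model_version="credit-model-v3",
)
print(contest_decision(record, rules))  # ['age_not_used']

The point of the sketch is the logging-plus-rules pattern: the software engineering half is the immutable decision record, and the rule-based half is a reviewable set of predicates that a contested decision can be replayed against.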
Further
definitions required.
ISSUES OF CONSTRUCTION OF LEGAL DEFINITIONS IN THE FIELD OF ARTIFICIAL INTELLIGENCE
This study examines problems in forming the conceptual apparatus in the field of legal support for artificial intelligence, with the aim of developing effective legal solutions for regulating new digital technologies. The work is based on a combination of general scientific and special legal methods, including analysis, description, generalization, and comparative law. Forming legal definitions of artificial intelligence and related concepts (robot, cyber-physical system, etc.) requires identifying the main legal features of artificial intelligence. The following key characteristics of artificial intelligence are identified: optional hardware implementation; the ability of the system to analyze its environment; autonomy in operation; the ability to accumulate experience, assess it, and carry out the task of self-learning; and the presence of "intelligence," described through the categories of "rationality" and "reasonableness," or simply the ability to "think like a person" or "act like a person" in all or in narrowly defined circumstances.
A clear
downside?
Artificial Intelligence and Copyright Law in a European context - A study on the protection of works produced by AI-systems
This master's thesis discusses current copyright rules and whether copyright protection presently exists for works produced by AI systems. There is a possibility to protect works that have been generated by AIs; however, this is only possible if a human is using the AI as a "tool" in order to reach a certain end-goal. There has to be a clear link between the human author and the machine; otherwise, neither authorship nor originality can be established. Ultimately, in a scenario where such a link is missing, the work would fall into the public domain.
The aforementioned is followed by possible solutions for protecting these works in the future. It is instructive to look into the legislation of countries such as the UK, the US, and EU Member States in order to study their ways of protecting similar types of works. These solutions address topics such as AI as an "employee," the UK concept of computer-generated works, and the attribution of legal personhood to AI systems. Additionally, there might be a need to change the structure of the current EU copyright rules, namely by lowering the thresholds for protection, in order to widen the possibilities of giving copyright protection to AI-generated works.
Ultimately, the thesis finds that the best way of protecting AI-generated works would be to develop a new sui generis rule for AI-generated works. This solution is the most likely to see the light of day, as it is a flexible and easy way of attributing copyright protection without changing or lowering the traditional copyright thresholds for protection.