Like “SWATting” but with more serious weapons?
https://www.wired.com/story/fake-warships-ais-signals-russia-crimea/
Phantom Warships Are Courting Chaos in Conflict Zones
… According to analysis conducted by conservation technology nonprofit SkyTruth and Global Fishing Watch, over 100 warships from at least 14 European countries, Russia, and the US appear to have had their locations faked, sometimes for days at a time, since August 2020. Some of these tracks show the warships approaching foreign naval bases or intruding into disputed waters, activities that could escalate tension in hot spots like the Black Sea and the Baltic. Only a few of these fake tracks have previously been reported, and all share characteristics that suggest a common perpetrator.
Will the US follow the EU, again?
https://researchportal.helsinki.fi/en/publications/damages-liability-for-harm-caused-by-artificial-intelligence-eu-l
Damages Liability for Harm Caused by Artificial Intelligence – EU Law in Flux
Artificial intelligence (AI) is an integral part of our everyday lives, able to perform a multitude of tasks with little to no human intervention. Many legal issues related to this phenomenon have not yet been comprehensively resolved. In that context, the question arises whether the existing legal rules on damages liability are sufficient for resolving cases involving AI. The EU institutions have started evaluating whether, and to what extent, new legislation regarding AI is needed, envisioning a European approach to avoid fragmentation of the Single Market. This article critically analyses the most relevant preparatory documents and proposals regarding civil liability for AI issued by EU legislators. In addition, we discuss the adequacy of existing legal doctrines on private liability for resolving cases where AI is involved. While existing national laws on damages liability can be applied to AI-related harm, the risk exists that case outcomes will be unpredictable and divergent, or, in some instances, unjust. The envisioned level playing field throughout the Single Market justifies harmonisation of many aspects of damages liability for AI-related harm. In the process, particular AI characteristics should be carefully considered in terms of questions such as causation and burden of proof.
Who would have thought politicians could be moral?
https://journals.aom.org/doi/abs/10.5465/AMBPP.2021.13567abstract
Moral legitimisation in science, technology and innovation policies
Worldwide, governments and institutions are formulating AI strategies that try to square the aspiration of exploiting the potential of machine learning with safeguarding their communities against the perceived ills of unchecked artificial systems. We claim that this new class of documents is an interesting showcase for a recent turn in policy work and formulation, which increasingly tries to intertwine moral sentiment with strategic dimensions. This process of moralizing is interesting and unprecedented coming from governmental actors, as these documents are guidance rather than law. Given the significant leeway in the development trajectories of open meta-technologies such as artificial intelligence, we argue that these moralizing elements within policy documents are illustrative of a new class of policy writing, meant to catalyze and shape public opinion and thus, by proxy, development trajectories.
(Related)
https://hrcak.srce.hr/ojs/index.php/eclic/article/view/18352
EU LEGAL SYSTEM AND CLAUSULA REBUS SIC STANTIBUS
We are witnesses to, and participants in, Copernican changes in the world that result in major crises and challenges (economic, political, social, climate, demographic, migratory, MORAL) that significantly alter “normal” circumstances. The law, as a large regulatory system, must find answers to these challenges.
… We believe that the most current definition of law is that law is the negation of the negation of morality. It follows that morality is the most important category of social development. Legitimacy, and then legality, relies on morality. In other words, the rules of conduct must be highly correlated with morality - legitimacy - legality. What is legal follows the rules; what is lawful follows moral substance and ethical permissibility. Therefore, only a fair and intelligent mastery of a highly professional and ethical teleological interpretation of law is a conditio sine qua non for overcoming current anomalies of social development. The juridical code of legal and illegal is a transformation of the moral, legitimate and legal into YES, and the immoral, illegitimate and illegal into NO. The future of education aims to generate a program for global action and a discussion on learning and knowledge for the future of humanity and the planet in a world of increasing complexity, uncertainty and insecurity.
Perhaps it isn’t moral, but social?
https://arxiv.org/abs/2107.12977
The social dilemma in AI development and why we have to solve it
While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI accelerates, even though there is no shortage of ethical guidelines. We argue that a main underlying cause for this is that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can be used as a template in this process.
There may be a reflexive response to perceived threats to privacy?
https://arxiv.org/abs/2107.11029
User Perception of Privacy with Ubiquitous Devices
Privacy is important to all individuals in everyday life. Emerging technologies, including smartphones with AR, social networking applications and AI-driven modes of surveillance, tend to intrude on privacy. This study aimed to explore various concerns related to the perception of privacy in this era of ubiquitous technologies. It employed an online survey questionnaire to study user perspectives on privacy. Purposive sampling was used to collect data from 60 participants, and inductive thematic analysis was used to analyze the data. Our study discovered key themes such as attitudes towards privacy in public and private spaces, privacy awareness, consent seeking, dilemmas and confusion related to various technologies, and the impact of attitudes and beliefs on individuals’ actions regarding how to protect themselves from invasion of privacy in both public and private spaces. These themes interacted among themselves and influenced the formation of various actions, serving as core principles that molded actions preventing invasion of privacy for both participant and bystander. The findings of this study should help improve the privacy and personalization of various emerging technologies. The study contributes to privacy by design and positive design by considering the psychological needs of users, suggesting that the findings can be applied in the areas of experience design, positive technologies, social computing and behavioral interventions.
Just because these articles on eliminating lawyers (and judges) amuse me.
https://repositorio.uautonoma.cl/handle/20.500.12728/9128
Robot Judges? Artificial Intelligence and Law
The increasing application of artificial intelligence in our day-to-day lives highlights the extraordinary development of this technology. Today we can observe the use of artificial intelligence for the assistance and execution of tasks of the most diverse nature, including in the legal field. There are already in use various programs and platforms, fed with information of legal interest, that allow legal cases to be solved in a short time. The latter raises the possibility of going one step further and incorporating robot judges into the administration of justice, which raises as many concerns as expectations, issues that this research analyzes.
(Related)
https://lida.hse.ru/article/view/12791
On the Prospects of Digitalization of Justice
The article considers the problem of digitalization of judicial activities in the Russian Federation and abroad. Given that elements of digital (electronic) justice are gaining widespread adoption in the modern world, the article presents an analysis of its fundamental principles and distinguishes between electronic methods of supporting procedural activity and digitalization of justice as an independent direction of transformation of public relations at the present stage. As a demonstration of the first direction, the article presents the experience of foreign countries, Russian legislative approaches, and legislative initiatives currently being developed to improve the interaction of participants in proceedings through the use of information technologies. The authors conclude that the implemented approaches and proposed amendments are intended only to modernize the form of administration of justice, offering new opportunities to carry out the same actions (identification of persons participating in the case, notification, participation in the court session, etc.) without changing the essential characteristics of the proceedings. The second direction, related to electronic (digital) justice, is examined from the point of view of the prospects and risks of using artificial intelligence technologies to make legally significant decisions on the merits. At the same time, the authors argue that the digitalization of justice requires the development and implementation of the category of justice in machine-readable law, as well as special security measures of both a technological and a legal nature.
I’m going to suggest that having a human in the loop slows down the decision process and delays taking action. Does that not increase liability?
https://www.tandfonline.com/doi/full/10.1080/13600834.2021.1958860
Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts
Public and private organizations are increasingly implementing various algorithmic decision-making systems. Through legal and practical incentives, humans will often need to be kept in the loop of such decision-making to maintain human agency and accountability, provide legal safeguards, or perform quality control. Introducing such human oversight results in various forms of semi-automated, or hybrid, decision-making – where algorithmic and human agents interact. Building on previous research, we illustrate the legal dependencies forming an impetus for hybrid decision-making in the policing, social welfare, and online moderation contexts. We highlight the further need to situate hybrid decision-making in a wider legal environment of data protection, constitutional and administrative legal principles, as well as the need for contextual analysis of such principles. Finally, we outline a research agenda to capture contextual legal dependencies of hybrid decision-making, pointing to the need to go beyond legal doctrinal studies by adopting socio-technical perspectives and empirical studies.
(Related) You can never fully rely on the machine?
https://journals.aom.org/doi/abs/10.5465/AMBPP.2021.14636abstract
Artificial Intelligence and Business Ethics: Goal Setting and Value Alignment as Management Concerns
Rapid advances in the development and use of artificial intelligence (AI) are having a profound effect both on organizations and on society at large. While AI is already used extensively in organizations to promote rational decision making, its emergence is also giving rise to profound ethical concerns. As such, we argue that organizational research has a crucial role to play in promoting beneficial development and use of this technology. Given the rapidly increasing autonomy and impact of AI systems, we draw attention to the fundamental importance of goal setting and value alignment in determining the ethical desirability of outcomes from this development. Importantly, we claim that goal setting necessitates ethical considerations that are not amenable to technology. Further, while the pursuit of goals can be partially delegated to AI, challenges relating to the representation of goals and (ethical) constraints imply that human involvement is crucial in preventing unforeseen consequences. Finally, we discuss issues relating to the malicious misuse and heedless overuse of AI, arguing that the importance of human agency and inclusion in decision making in fact increases with adoption of the technology, due to the escalating scale and impact of the decisions made. Given the profound impact of these decisions on numerous stakeholders, we suggest that organizational research stands to contribute to the relevance, comprehensiveness, and integrity of AI ethics in the face of this revolutionary technology.
Microsoft is being a bad boy? Imagine that!
https://www.theatlantic.com/ideas/archive/2021/07/microsofts-antitrust/619599/?scrolla=5eb6d68b7fedc32c19ef33b4
The Invisible Tech Behemoth
… Microsoft is the company that could truly test the Biden-era commitment to anti-bigness and, as a lawyer friend of mine put it, define the limiting principle of new Federal Trade Commission Chair Lina Khan’s antitrust theory. Since its own brush with antitrust regulation decades ago, Microsoft has slipped past significant scrutiny. The company is reluctantly guilty of the sin of bigness, yes, but it is benevolent, don’t you see? Reformed, even! No need to cast your pen over here!