Is our definition of privacy narrowing or expanding?
A Technical Look At The Indian Personal Data Protection Bill
The Indian Personal Data Protection Bill 2019 provides a legal framework for protecting personal data. It is modeled after the European Union’s General Data Protection Regulation (GDPR). We present a detailed description of the Bill, its differences from the GDPR, and the challenges and limitations in implementing it. We look at the technical aspects of the Bill and suggest ways to address its different clauses, mostly exploring cryptographic solutions. There are two broad outcomes of this study. Firstly, we show that a better technical understanding of privacy is important for clearly defining the clauses of the Bill. Secondly, we show how technical and legal solutions can be used together to enforce it.
For my Computer Forensics students.
AI Forensics: Did the Artificial Intelligence System Do It? Why?
AI systems increasingly make autonomous decisions that impact our daily lives. Their actions might cause accidents, harm or, more generally, violate regulations, whether intentionally or not. Thus, AI systems might be considered suspects for various events. Therefore, it is essential to relate particular events to an AI, its owner and its creator. Given a multitude of AI systems from multiple manufacturers, potentially altered by their owners or changed through self-learning, this is non-trivial. This paper discusses how to identify AI systems responsible for incidents, as well as their motives, which might be “malicious by design”. In addition to a conceptualization, we conduct two case studies based on reinforcement learning and convolutional neural networks to illustrate our proposed methods and the associated challenges. Our cases illustrate that “catching AI systems” is often far from trivial and requires extensive expertise in machine learning. Legislative measures that mandate information to be collected during the operation of AI systems, as well as means to uniquely identify systems, might ease the problem.
Even if AI were a ‘person’, it wouldn’t have deep enough pockets…
Who Pays for AI Injury?
Algorithms hurt people every day. Are such injuries just regrettable externalities of technological progress that victims should be left to bear? Or should someone else be civilly or criminally liable for the injury? The law does not always provide an answer, which can leave innocent victims holding the bag. Traditional doctrines condition liability on faulty conduct by a human agent, but we are now entering a phase of technological and economic progress where the people involved might be doing everything they should, and it is the machines that are misbehaving. The trouble is that machines are not cognizable legal actors.
This short paper turns to corporate law for a solution. While algorithms are not legal actors, the corporations that develop and run them are. The law should recognize that corporations act through the algorithms over which they have beneficial control. Then the social control that the law exercises over corporate harm would come to bear on algorithmic harm too.
Another backgrounder.
AI 101
… In The AI 101 Report, Business Insider Intelligence, Business Insider's premium research service, describes how AI works and looks at its present and potential future applications.
Perspective.
Amazon’s Big Breakdown
… At the online retailer, however, things were not going well. For many shoppers, it was the first place to turn, but demand for certain items was overwhelming the company’s ability to fulfill orders, not just for panic buyers but in general. By March 17, Amazon had suspended shipments to its warehouses of items that were not in “high demand,” scrambling, and often failing, to keep up with orders for soap, sanitizers and face masks, as well as a wide range of household staples, including food. By then, customers looking for these items were, for the first time, experiencing an Amazon that was conspicuously broken. Empty shelves in a supermarket are self-explanatory. But on Amazon, customers were confronted with failures that were much weirder and harder to understand, with, of course, nobody around to explain them.
… In other words, April 2020 wasn’t far off from where things might be in 2025 or even 2030. When millions of people showed up five or 10 years too soon, Amazon’s systems weren’t ready to accommodate them.